MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has released a video of their ongoing work using input from muscle signals to control devices.
Their latest work involves full, fine-grained control of drones, using just hand and arm gestures to navigate them through a series of rings.
This work is impressive not just because they're using biofeedback rather than optical or other kinds of gesture recognition to control the devices, but also because of how specific the controls can be, opening up a range of potential applications for this kind of remote tech.
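To give a flavor of what muscle-signal control can look like in practice, here is a minimal, hypothetical sketch of mapping smoothed surface-EMG activation levels to coarse drone commands. This is not CSAIL's actual pipeline; the sensor channels, thresholds, gesture labels, and velocity values are all invented for illustration.

```python
# Hypothetical sketch: mapping smoothed (RMS) muscle activation levels
# from two forearm EMG channels to coarse drone commands. All names,
# thresholds, and command values here are illustrative assumptions,
# not CSAIL's actual system.

def classify_gesture(biceps_rms: float, triceps_rms: float,
                     threshold: float = 0.3) -> str:
    """Map two muscle activation levels to a coarse gesture label."""
    if biceps_rms > threshold and triceps_rms > threshold:
        return "stop"      # co-contraction: freeze in place
    if biceps_rms > threshold:
        return "ascend"    # biceps flex: climb
    if triceps_rms > threshold:
        return "descend"   # triceps extension: descend
    return "hover"         # relaxed arm: hold position


def gesture_to_velocity(gesture: str) -> tuple:
    """Translate a gesture label into an (x, y, z) velocity setpoint in m/s."""
    return {
        "hover":   (0.0, 0.0, 0.0),
        "stop":    (0.0, 0.0, 0.0),
        "ascend":  (0.0, 0.0, 0.5),
        "descend": (0.0, 0.0, -0.5),
    }[gesture]


print(classify_gesture(0.6, 0.1))        # biceps flexed -> "ascend"
print(gesture_to_velocity("ascend"))     # -> (0.0, 0.0, 0.5)
```

A real system would replace the fixed thresholds with a trained classifier and feed the velocity setpoints to a flight controller, but the basic shape (signal smoothing, gesture classification, command mapping) is the same.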
This particular group of researchers has been exploring different applications for this tech, including its use in collaborative robotics for potential industrial settings.
Drone piloting is another area that could see big real-world benefits, especially once you start to imagine entire flocks of drones taking flight, with a pilot given a view of what they see via VR.
That could be a great way to do site surveying for construction, for example, or remote equipment inspection of offshore platforms and other infrastructure that’s hard for people to reach.
Seamless robot/human interaction is the ultimate goal of the team working on this tech: just as we intuitively sense our own movements and manipulate our environment, they believe the process should be equally smooth when controlling and working with robots.
Thinking and doing essentially happen in parallel when we interact with our environment directly, but when we act through machines or remote tools, something is often lost in translation, resulting in a steep learning curve and a need for extensive training.
Translated by: 菲菲