According to foreign media reports, researchers at the Massachusetts Institute of Technology (MIT) have developed a new headset called "AlterEgo" that can, in effect, read a user's mind. The user does not need to speak aloud; the device identifies what the user intends to say, and the reported accuracy of recognizing these "silent speech" messages is as high as 92%. With this technology, people could issue commands in situations where speaking aloud is awkward, such as in public, sparing themselves considerable embarrassment.
This headset is actually a computer interface consisting of a wearable device and an associated computing system. Electrodes in the device capture neuromuscular signals in the jaw and face. These signals are triggered by internal verbalization (saying words "in your head") but are invisible to the human eye. The signals are fed to a machine-learning system that has been trained to associate particular signals with particular words.
The device also includes a pair of bone-conduction headphones, which transmit vibrations through the facial bones to the inner ear. Because they do not block the ear canal, the system can convey information to the user without interrupting a conversation or otherwise disturbing what the user hears.
The device is part of a complete silent-computing system that lets the user pose, and receive answers to, computational problems without anyone noticing. For example, in one of the researchers' experiments, subjects used the system to silently report an opponent's moves in a chess game and to silently receive computer-recommended responses.
"The motivation for this was to build an intelligence-augmentation device," said Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system. "Our idea was: Could we have a computing platform that's more internal, that melds human and machine in some ways, and that feels like an internal extension of our own cognition?"
"We basically can't live without our cell phones and digital devices," said Pattie Maes, a professor of media arts and sciences. "But at the moment, the use of those devices is very disruptive. If I want to look something up that's relevant to a conversation I'm having, I have to find my phone, type in the passcode, open an app, and enter some search keywords, and the whole process requires that I completely shift my attention from my surroundings and the people I'm with to the phone itself. So, my students and I have been experimenting for a long time with new form factors and new types of experiences that let people still benefit from all the wonderful knowledge and services these devices provide, while remaining better integrated into their surroundings."
The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. Indeed, one goal of the speed-reading movement of the 1960s was to eliminate internal verbalization, also known as "subvocalization."
As a computer interface, however, subvocalization is largely unexplored. The researchers' first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.
The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In their conference paper, the researchers report a prototype of a wearable silent-speech interface that wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaw.
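The electrode-selection step described above can be sketched in miniature: score each candidate facial site by how reliably a classifier trained on its signals distinguishes the subvocalized words, then keep the best-scoring sites. The site names and accuracy values below are invented placeholders, not the researchers' data.

```python
# Hypothetical sketch: rank 16 candidate electrode sites by how reliably
# their signals distinguish a fixed set of silently mouthed words, then
# keep the top seven. All accuracy numbers here are made up.

def select_electrodes(site_accuracy, k):
    """Return the k electrode sites with the highest word-discrimination accuracy."""
    ranked = sorted(site_accuracy.items(), key=lambda kv: kv[1], reverse=True)
    return [site for site, _ in ranked[:k]]

# Per-site classification accuracies from four repeated word sessions (toy values).
scores = {f"site_{i}": acc for i, acc in enumerate(
    [0.31, 0.55, 0.72, 0.40, 0.68, 0.81, 0.29, 0.77,
     0.64, 0.83, 0.36, 0.70, 0.59, 0.79, 0.44, 0.66])}

best = select_electrodes(scores, 7)  # the seven most discriminative sites
```

In practice the per-site score would itself come from training and cross-validating a small classifier on that site's signals; the ranking step shown here is the same either way.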
In more recent experiments, however, the researchers obtained comparable results using only four electrodes along one jaw, which should make the wearable device far less cumbersome than earlier versions.
Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies of about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.
Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, whose nodes process them and pass them on in turn, and so on. The output of the final layer yields the result of some classification task.
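The layered forward pass described above can be illustrated with a toy network: each layer's nodes combine the outputs of the layer below, and the top layer scores word classes. All weights, features, and the "yes"/"no" vocabulary below are invented for illustration; they are not a trained silent-speech model.

```python
import math

def relu(x):
    # Nonlinearity applied at each hidden node: negative sums become zero.
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    """One dense layer: each output node is a weighted sum of all inputs plus a bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def softmax(x):
    # Turn final-layer scores into a probability distribution over words.
    exps = [math.exp(v - max(x)) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 2-layer network: 4 signal features -> 3 hidden nodes -> 2 word classes.
features = [0.2, -0.1, 0.7, 0.4]  # e.g. features extracted from electrode signals
hidden = relu(layer(features, [[0.5, -0.2, 0.1, 0.3],
                               [-0.4, 0.6, 0.2, 0.0],
                               [0.1, 0.1, -0.3, 0.5]], [0.0, 0.1, -0.1]))
scores = softmax(layer(hidden, [[0.7, -0.5, 0.2],
                                [-0.3, 0.4, 0.6]], [0.0, 0.0]))
predicted_word = ["yes", "no"][scores.index(max(scores))]
```

A real system would learn the weights from the recorded signal-word pairs rather than hard-coding them, but the data flow, bottom layer to top layer to classification, is the same.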
The basic configuration of the researchers' system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains only the last two layers.
How reliable is "silent speech" recognition?
The researchers conducted a usability study with the prototype wearable interface in which 10 subjects each spent about 15 minutes customizing the system to their own signals, then spent another 90 minutes using it to execute computations. In that study, the system's average transcription accuracy was about 92%.
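A transcription-accuracy figure like this is simply the fraction of subvocalized words the system decoded correctly. The word lists below are invented examples, not data from the study.

```python
# Toy illustration of computing transcription accuracy: the share of
# intended words that were decoded correctly. Word lists are made up.

def transcription_accuracy(predicted, actual):
    """Fraction of positions where the decoded word matches the intended word."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

intended = ["add", "3", "5", "multiply", "7", "2", "add", "9", "1", "multiply",
            "4", "6", "add", "8", "0", "multiply", "5", "3", "add", "2", "7",
            "multiply", "9", "4", "add"]
decoded = intended.copy()
decoded[5] = "3"    # first decoding error
decoded[17] = "add"  # second decoding error

acc = transcription_accuracy(decoded, intended)  # 23 of 25 words correct = 0.92
```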
However, Kapur said that the system's performance should improve with more training data, which could be collected during daily use. Although he has not crunched the numbers, he estimates that the better-trained system he uses for demonstrations has a higher accuracy rate than the one in the usability study.
In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much broader vocabularies.
Kapur said: "I think we'll achieve full conversation some day."
Thad Starner, a professor in the Georgia Institute of Technology's College of Computing, pointed out that this "silent reading" system has great potential in certain settings. In noisy environments such as airport aprons, for instance, it could help ground crews and other workers exchange signals more effectively. It would also be valuable in especially quiet places: where you cannot speak aloud, such a device could hardly be more convenient. In addition, people with disabilities that affect vocalization could take full advantage of this technology. (Original title: Mind reading becomes reality! MIT develops head-mounted device "AlterEgo" with 92% accuracy in recognizing information)