We are Cortex Labs, an independent Australian technology company located in Melbourne, Victoria.
Our work is focused on creating software based on cutting-edge neurotechnology, using human-computer interface platforms to enhance the lives of our users.
At Cortex Labs we are excited and passionate about what we do. We consistently stay on top of trends, developing sophisticated and innovative software solutions that harness the potential of your mind.
We create human-computer interaction software, primarily focused on fatigue detection and accessibility. We have chosen these applications specifically because of the benefit our products will provide to people, businesses, and industries.
Our fatigue detection software will help reduce the large number of accidents caused every year by fatigued drivers and machine operators. Our accessibility product assists people with disabilities, offering new ways to communicate and to compensate for limited motor movement.
Cortex Labs is always looking for new ideas and ways to improve our products. Browse our product selection to get a better idea of what we do.
Cortex Labs has just finished developing another great BCI accessibility tool, NeuroTOUCH, utilising the eMotiv EPOC headset. We expect it to be officially released by June 2011. A new age of assistive...
Cortex Labs has just finished developing another great BCI gaming product, NeuroLINK, utilising the eMotiv EPOC headset. We expect it to be officially released by June 2011. This gaming interface offers the...
The control panel we developed makes human-computer interaction easier. It uses the signals measured by the headset to interpret user expressions in real time, providing a natural enhancement to computer interaction by allowing human expressions to be mapped into computer commands.
Let's picture some scenarios in which a user can interact with the software:
There are many ways you can imagine using the apps. With this application, people with disabilities can perceive, understand, navigate, and interact with a computer through their thoughts and expressions. The gaming experience will also be improved by taking advantage of the app.
The application is primarily made up of two pieces: the front end and the back end. The front end handles user interaction, while the back end interacts with the eMotiv SDK.
The front end was built using WinForms technology powered by the .NET Framework, so it can run on any computer that has .NET 2.0 or higher installed. All the forms were developed using the Krypton Toolkit.
The ribbon controls come from an open-source Ribbon control library.
The front end allows the user to manage profiles and add rules to each profile.
The front end also provides accessibility tools such as an on-screen keyboard, Magnify, Dasher, and the interactive mouse.
When a new profile is added, the software creates two files: one storing the training history and one storing the rules. The training history file is stored in binary format; the rule set is mapped into an XML file.
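The document does not specify the rule file's schema; as an illustrative sketch (in Python rather than the application's C#/.NET, with element and attribute names invented for the example), mapping a rule set to and from XML might look like this:

```python
import xml.etree.ElementTree as ET

def rules_to_xml(rules):
    """Serialise a list of rule dicts into an XML string.

    The element and attribute names are illustrative, not the
    application's actual schema.
    """
    root = ET.Element("rules")
    for rule in rules:
        node = ET.SubElement(root, "rule", enabled=str(rule["enabled"]))
        ET.SubElement(node, "keys").text = rule["keys"]
        ET.SubElement(node, "sendOnce").text = str(rule["send_once"])
        ET.SubElement(node, "targetApplication").text = rule["target"]
        for cond in rule["conditions"]:
            ET.SubElement(node, "condition").text = cond
    return ET.tostring(root, encoding="unicode")

def xml_to_rules(xml_text):
    """Parse the XML produced above back into rule dicts."""
    rules = []
    for node in ET.fromstring(xml_text):
        rules.append({
            "enabled": node.get("enabled") == "True",
            "keys": node.findtext("keys"),
            "send_once": node.findtext("sendOnce") == "True",
            "target": node.findtext("targetApplication"),
            "conditions": [c.text for c in node.findall("condition")],
        })
    return rules
```

A round trip through serialise and parse returns the original rule set, which is the property the profile's rule file relies on.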
Each profile can contain any number of rules.
Each rule is made up of five parts: keys, send once, target application, conditions, and enable.
With keys, the user can select which action to apply (sending keystrokes or sending a mouse click).
There are two ways of sending keystrokes:
With mouse click, the user can choose to send a left click or a right click.
If the "send once" checkbox is ticked, the specified keys are sent to the target application only once. Otherwise the app keeps sending the keys to the target application every 10 milliseconds.
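The five rule parts and the send-once behaviour described above can be sketched as follows. This is an illustrative Python sketch (the real app is C#/.NET); `send` and `condition_active` are stand-ins for the input library and the condition check, and `max_repeats` bounds the demo loop, whereas the real app repeats every 10 ms while the condition holds:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Rule:
    # The five parts of a rule described in the text.
    keys: str                  # keystrokes or mouse click to send
    send_once: bool            # send once, or repeat while active
    target: str                # target application window
    conditions: list = field(default_factory=list)
    enabled: bool = True

def dispatch(rule, send, condition_active, interval=0.01, max_repeats=5):
    """Send the rule's keys to its target, once or repeatedly."""
    if not rule.enabled:
        return 0
    sent = 0
    while condition_active() and sent < max_repeats:
        send(rule.keys, rule.target)
        sent += 1
        if rule.send_once:
            break          # "send once" ticked: stop after one send
        time.sleep(interval)
    return sent
```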
The user can select the target application using the Target Application form. The form allows the user to drag and drop the cursor to pick the target window. It uses the Windows API to retrieve the process ID, the handle, the name of the window, ...
There are two reasons why we need the window handle; the process ID by itself is not enough.
The conditions form allows the user to add conditions to a rule and to specify how each condition is triggered. The Blink, Wink left, Wink right, and Neutral conditions can only be triggered as "occurs" or "does not occur". The other conditions can be triggered by specifying a value range; for example, if the user smiles for more than 2 seconds, the rule is applied. The user can enable or disable a condition by ticking or unticking the enable checkbox.
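The two kinds of trigger can be sketched like this (an illustrative Python example; the expression names, mode strings, and the duration-based representation are assumptions, not the application's actual API):

```python
def condition_met(detection, condition):
    """Evaluate one condition against a detection sample.

    `detection` maps an expression name to how long (in seconds) it
    has currently been active, 0 meaning not detected.
    """
    duration = detection.get(condition["name"], 0.0)
    mode = condition["mode"]
    if mode == "occurs":                # e.g. Blink, Wink left/right
        return duration > 0
    if mode == "does not occur":
        return duration == 0
    if mode == "greater than":          # e.g. smile for > 2 seconds
        return duration > condition["seconds"]
    raise ValueError("unknown condition mode: " + mode)
```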
The training form is made up of three parts: a vertical progress bar that displays the power signal, a user control that simulates the training, and a section that displays the training status and lets the user manage training actions. The simulator is a WPF (Windows Presentation Foundation) control.
The back end interacts with the headset by calling functions from the eMotiv SDK. All the low-level processing, such as signal power and emotion detection, is done by the eMotiv SDK. The back end acts as a layer of abstraction so that the front end does not have to call the eMotiv SDK directly.
When the back end receives the list of rules specified by the user from the front end, it processes each rule by calling the appropriate function in the eMotiv SDK.
To handle sending keystrokes and mouse clicks, the back end uses a third-party library called WindowsInput.dll. This DLL interacts with the Windows API to handle low-level keyboard and mouse hooks. If a rule sends keystrokes or a mouse click, then when the back end processes that rule it retrieves the list of keystrokes and passes it to the appropriate function in WindowsInput.dll, which then sends those keystrokes or mouse clicks to the target application.
If the target application is not the one in focus, the keystrokes or mouse clicks are simply ignored.
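The focus check can be sketched like this (an illustrative Python sketch; in the real back end the foreground window handle comes from the Windows API, and `send` stands in for WindowsInput.dll):

```python
def process_rules(rules, focused_handle, send):
    """Deliver input only for rules whose target window has focus.

    `rules` is a list of (keys, target_handle) pairs. Rules targeting
    an unfocused window are silently dropped, matching the behaviour
    described in the text.
    """
    delivered = []
    for keys, target_handle in rules:
        if target_handle == focused_handle:
            send(keys, target_handle)
            delivered.append(keys)
        # otherwise: target not in focus, input is ignored
    return delivered
```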
Inside the app there is a training module that lets users train themselves to become familiar with controlling their thoughts. Each person has a slightly different pattern for performing certain actions; for example, the power level of brain signals varies from person to person, and person A may think of a Push action in a different way than person B. Thus, the training module also allows the software to capture each user's expression patterns.
Each user has their own profile, which stores all of their training history. The software then uses that profile to filter brain signals and match them to particular actions.
By default, when a user profile is created, the cognitive Neutral and cognitive Push actions are added to it. Cognitive Neutral is very important because it is used as a grounding point; it must be trained, otherwise the user's thoughts and expressions may not be captured correctly. At this point, due to certain limitations in the eMotiv SDK, the software only allows each user to train a maximum of four cognitive actions (excluding the default cognitive Neutral).
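The profile defaults and the four-action limit can be sketched as follows (an illustrative Python sketch; the class and method names are invented for the example, and the limit is taken from the text above):

```python
class UserProfile:
    """Sketch of a profile holding trained cognitive actions.

    Neutral and Push are added on creation; the SDK limit of four
    trainable cognitive actions (excluding Neutral) is enforced here
    for illustration.
    """
    MAX_ACTIONS = 4

    def __init__(self, name):
        self.name = name
        self.actions = ["neutral", "push"]   # defaults on creation
        self.training_history = []

    def add_action(self, action):
        trained = [a for a in self.actions if a != "neutral"]
        if len(trained) >= self.MAX_ACTIONS:
            raise ValueError("SDK limit: at most 4 cognitive actions "
                             "(excluding neutral)")
        self.actions.append(action)
```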
To communicate between the back end and the front end, we use delegates and safe invocation. Since the eMotiv SDK runs on a different thread from the UI, when the back end needs to send a message back to the front end it must do so using a thread-safe invoke and a delegate.
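The same cross-thread pattern can be sketched in Python with a thread-safe queue (an illustration only; in the actual WinForms front end the delegate is marshalled onto the UI thread via Control.Invoke, and the queue here plays that role):

```python
import queue
import threading

def sdk_worker(events, ui_queue):
    """Background thread standing in for the eMotiv SDK event loop.

    Instead of touching the UI directly, it posts messages onto a
    thread-safe queue, analogous to invoking a delegate on the UI
    thread.
    """
    for event in events:
        ui_queue.put(event)
    ui_queue.put(None)  # sentinel: worker finished

def ui_loop(ui_queue):
    """'UI thread' draining messages posted by the worker."""
    received = []
    while True:
        msg = ui_queue.get()
        if msg is None:
            break
        received.append(msg)
    return received
```

Because only the queue is shared between threads, the "UI" side never races with the SDK side, which is the property the thread-safe invoke provides in the real application.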
Cortex Labs
11, 75-79 Chetwynd Street
North Melbourne 3051
Tel: 1300 885 772
Email: info@cortexlabs.com.au
At Cortex Labs we offer a variety of products focused on enhancing the user experience by applying advancements in neurotechnology from eMotiv and NeuroSky. We have focused our efforts on creating software in the categories of accessibility, gaming, and education.
©2011 CortexLabs - All Rights Reserved