
Who We Are

We are Cortex Labs, an independent Australian technology company located in Melbourne, Victoria.

Our work focuses on creating software based on cutting-edge neurotechnology, using human-computer interface platforms to enhance the lives of our users.

At Cortex Labs we are excited and passionate about what we do. We consistently stay on top of trends, developing sophisticated and innovative software solutions that harness the potential of your mind.


What We Do

We create human-computer interaction software, focused primarily on fatigue detection and accessibility. We chose these applications because of the benefit our products provide to people, businesses, and industries.

Our fatigue detection software will help reduce the large number of accidents caused every year by fatigued drivers and machine operators. Our accessibility product assists people with disabilities, offering new ways to communicate and to replicate motor movement.

Cortex Labs is always looking for new ideas and ways to improve our products. Browse our product selection to get a better idea of what we do.


News

NeuroTOUCH Accessibility Tool

Cortex Labs has just finished developing another BCI accessibility tool, NeuroTOUCH, utilising the eMotiv EPOC headset. We expect it to be officially released by June 2011. A new age of assistive...

NeuroLINK Gaming Product

Cortex Labs has just finished developing another BCI gaming product, NeuroLINK, utilising the eMotiv EPOC headset. We expect it to be officially released by June 2011. This gaming interface offers the...

Saturday, 16 April 2011 06:08

Cortex Labs Brain to Computer Interface Project


The developed control panel offers the opportunity for easier human computer interaction. It uses the signals measured by the headset to interpret user expressions in real-time. It provides a natural enhancement to computer interaction by allowing human expression to be mapped into computer commands.
Let's picture some scenarios in which a user can interact with the software:

  • A person can launch their email application with just a blink, then use the accessibility tools that come with the software to type a message without having to reach for the keyboard or mouse.
  • A person can also play their favourite playlist with a smile.
  • The application can add an extra dimension to game interaction by allowing the game to respond to a player's emotions: characters can transform in response to the player's feelings, as the application reads and interprets a player's conscious thoughts and intent.
  • Gamers can manipulate virtual objects using only the power of their thoughts!
  • And so on.

There are many more ways you can think of to use the application. With it, people with disabilities can perceive, understand, navigate, and interact with a computer through their thoughts and expressions, and gamers enjoy a richer experience.
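At its core, the control panel translates detected expressions into computer commands. A minimal sketch of that idea in Python is below; the expression names and command names are made up for illustration and are not the software's actual mappings:

```python
# Illustrative sketch only: map detected headset expressions to commands,
# echoing the scenarios above (blink -> email, smile -> playlist, ...).
COMMAND_MAP = {
    "blink": "launch_email",
    "smile": "play_favourite_playlist",
    "push": "select_object",
}

def dispatch(expression):
    """Translate a detected expression into a command name, or None."""
    return COMMAND_MAP.get(expression)
```

In the real application this lookup is driven by user-defined rules rather than a fixed table.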

The application is primarily made up of two pieces: a front end and a back end. The front end is responsible for handling user interaction, while the back end is responsible for interacting with the eMotiv SDK.
The front end was built with Windows Forms on the .NET Framework, so it can run on any computer with .NET 2.0 or higher installed. All the forms are developed using the Krypton toolkit, and the ribbon controls come from an open-source Ribbon control library.
The front end allows the user to manage profiles and add rules to a given profile.

The front end also provides accessibility tools such as an on-screen keyboard, Magnify, Dasher, and an interactive mouse.

  •     The on-screen keyboard displays a virtual keyboard on the monitor and allows the user to type as if it were a real keyboard.
  •     Magnify acts as a magnifying glass, letting the user zoom in on any area of the screen to see it more clearly.
  •     Dasher allows the user to type messages through mouse movement.
  •     The interactive mouse allows the user to move the mouse cursor by moving the headset; the user can choose how fast the cursor moves relative to the movement of the headset.

When a new profile is added, the software creates two files: one storing the training history and one storing the rules. The training history file is stored in binary format; the rule set is mapped into an XML file.
Each profile can have as many rules as needed.
Each rule is made up of five pieces: keys, send once, target application, conditions, and enable.
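The five-part rule and its XML mapping can be sketched as below. The field names and XML element names are assumptions for illustration, not the software's actual schema:

```python
# Sketch of a five-part rule (keys, send once, target application,
# conditions, enable) and its serialisation into XML, as described above.
import xml.etree.ElementTree as ET
from dataclasses import dataclass, field

@dataclass
class Rule:
    keys: str                  # keystrokes or mouse click to send
    send_once: bool            # send once, or keep sending while triggered
    target_application: str    # window that should receive the input
    conditions: list = field(default_factory=list)   # e.g. ["smile > 2s"]
    enabled: bool = True

def rule_to_xml(rule):
    """Serialise one rule into an XML fragment (illustrative element names)."""
    el = ET.Element("rule", enabled=str(rule.enabled).lower())
    ET.SubElement(el, "keys").text = rule.keys
    ET.SubElement(el, "sendOnce").text = str(rule.send_once).lower()
    ET.SubElement(el, "target").text = rule.target_application
    for c in rule.conditions:
        ET.SubElement(el, "condition").text = c
    return ET.tostring(el, encoding="unicode")
```

The binary training-history file is a separate format and is not sketched here.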

With keys, the user selects which action to apply: sending keystrokes or sending a mouse click.

There are two types of sending keystrokes:

  •     Send keystrokes: the user can select a series of characters to send. Each character is treated equally and sent in sequence, in the order the user specified.
  •     Send combination keys: the user can select a key to send along with modifiers, such as Alt + F4 or Ctrl - Alt - Delete. Alt, Ctrl, the Windows key, and Shift are modifier keys; Delete, F4, and so on are keys.
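The modifier/key split described above can be sketched as a small parser; the function name and modifier set are illustrative assumptions:

```python
# Sketch: split a combination such as "Alt+F4" or "Ctrl-Alt-Delete" into
# its modifier keys and its main key, per the distinction described above.
MODIFIERS = {"alt", "ctrl", "shift", "win"}

def parse_combination(combo):
    """Return (modifiers, key); accepts either '+' or '-' as a separator."""
    parts = combo.replace("-", "+").split("+")
    mods = [p for p in parts if p.lower() in MODIFIERS]
    keys = [p for p in parts if p.lower() not in MODIFIERS]
    return mods, keys[-1] if keys else None
```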

With mouse click, the user can choose to send a left click or a right click.

If the "send once" checkbox is ticked, the specified keys are sent to the target application only once; otherwise the app keeps sending them to the target application every 10 milliseconds.

The user selects the target application using the Target Application form. The form allows the user to drag and drop the cursor to pick the target window; it uses the Windows API to retrieve the process ID, the window handle, the window name, and so on.
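The "send once" versus repeat-every-10-ms behaviour can be sketched as below; `send_fn` and `condition_active` are hypothetical stand-ins for the real input-sending and condition-checking code:

```python
# Sketch of the "send once" option: either a single send, or a repeated
# send at a fixed interval while the rule's condition remains active.
import time

def run_rule(send_fn, condition_active, send_once, interval_s=0.01):
    """Invoke send_fn once, or every interval_s seconds while active."""
    if send_once:
        send_fn()
        return
    while condition_active():
        send_fn()
        time.sleep(interval_s)   # the software uses a 10 ms interval
```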
There are two reasons why we need the window handle (the process ID by itself is not enough):

  •     Suppose the user opens multiple windows of an application such as Notepad and those windows share the same process ID. The user wants keystrokes sent to only one of the Notepad windows, not the others; if we used only the process ID, our application would send the keystrokes to all of those windows, which is not what the user asked for.
  •     Another case is when we want to send keys to a particular section of a window rather than the whole window. Take Internet Explorer as an example: the user may want to send keystrokes to the address bar with a Push action, and send Alt-F4 to close the Internet Explorer window with a Pull action.
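The ambiguity described in the first point can be illustrated with toy data; the handles, PIDs, and titles below are made-up values, not real Windows data:

```python
# Sketch: several windows can share one process ID, so the PID alone
# cannot single out the target window -- only a handle identifies one.
WINDOWS = [
    {"handle": 0x1A, "pid": 4242, "title": "Untitled - Notepad"},
    {"handle": 0x2B, "pid": 4242, "title": "notes.txt - Notepad"},
    {"handle": 0x3C, "pid": 5151, "title": "Internet Explorer"},
]

def windows_for_pid(pid):
    """All handles owned by a process: ambiguous when there is more than one."""
    return [w["handle"] for w in WINDOWS if w["pid"] == pid]

def window_for_handle(handle):
    """A handle identifies exactly one window."""
    return next(w for w in WINDOWS if w["handle"] == handle)
```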

The conditions form allows the user to add conditions to a rule and specify how each condition is triggered. Blink, Wink left, Wink right, and Neutral are triggered only when the condition "occurs" or "does not occur". The other conditions are triggered by specifying a value range; for example, if the user smiles for more than 2 seconds, the rule is applied. The user can enable or disable a condition by ticking or unticking its enable checkbox.
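The two condition styles can be sketched as one evaluation function; the dictionary keys and condition kinds are illustrative assumptions, not the actual rule schema:

```python
# Sketch: evaluate a rule condition against a detected expression event.
# Boolean conditions use "occurs" / "does_not_occur"; range conditions
# compare a measured value, e.g. a smile held for more than 2 seconds.
def condition_met(cond, event):
    if not cond.get("enabled", True):
        return False                      # disabled via the enable checkbox
    occurred = cond["expression"] == event["expression"]
    kind = cond["kind"]
    if kind == "occurs":
        return occurred
    if kind == "does_not_occur":
        return not occurred
    if kind == "greater_than":            # e.g. smile for > 2 seconds
        return occurred and event["duration"] > cond["threshold"]
    raise ValueError("unknown condition kind: " + kind)
```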

The training form is made up of three parts: a vertical progress bar that displays signal power, a user control that simulates the training, and a section that displays training status and lets the user manage training actions. The stimulator is a WPF (Windows Presentation Foundation) control.

The back end interacts with the headset by calling functions from the eMotiv SDK. All the low-level processing, such as signal power and emotion detection, is done by the eMotiv SDK; the back end acts as a layer of abstraction so that the front end does not have to call the eMotiv SDK directly.
When the back end receives the list of rules specified by the user from the front end, it processes each rule by calling the appropriate function in the eMotiv SDK.
To handle sending keystrokes and mouse clicks, the back end uses a third-party library called WindowsInput.dll, which interacts with the Windows API to handle low-level keyboard and mouse hooks. If a rule sends keystrokes or a mouse click, the back end retrieves the list of keystrokes while processing the rule and passes it to the appropriate function in WindowsInput.dll, which then sends those keystrokes or mouse clicks to the target application.
If the target application is not the one in focus, the keystrokes or mouse clicks are simply ignored.
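The back end's rule processing, including the focus check, can be sketched as below; `send_input` stands in for the WindowsInput.dll call, and the rule fields are illustrative:

```python
# Sketch: for each enabled rule whose target window currently has focus,
# hand its keystrokes to an input-sending function; others are skipped.
def process_rules(rules, focused_window, send_input):
    """Send input for each enabled rule targeting the focused window."""
    sent = []
    for rule in rules:
        if not rule.get("enabled", True):
            continue
        if rule["target"] != focused_window:
            continue                      # unfocused target: input is ignored
        send_input(rule["keys"])
        sent.append(rule["keys"])
    return sent
```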

Inside the app there is a training module that allows users to train themselves to become familiar with controlling their thoughts, since each person has a slightly different pattern for performing certain actions. For example, the power level of brain signals varies from person to person, and person A may think of a Push action in a different way than person B. The training module therefore also allows the software to capture each user's expression patterns.
Each user has their own profile, which stores all of their training history. The software uses that profile to filter brain signals and match them to particular actions.
By default, when a user profile is created, cognitive Neutral and cognitive Push are added to it. Cognitive Neutral is very important because it is used as a grounding point, and it must be trained, otherwise the user's thoughts and expressions may not be captured correctly. At present, due to limitations in the eMotiv SDK, the software allows each user to train a maximum of four cognitive actions (excluding the default cognitive Neutral).

To communicate between the back end and the front end, we use delegates and thread-safe invocation. The eMotiv SDK runs on a different thread from the UI, so when the back end needs to send a message back to the front end, it must do so through a thread-safe Invoke call and a delegate.
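A rough Python analogue of that delegate-plus-thread-safe-Invoke pattern is to marshal back-end messages to the UI thread through a queue; this is a sketch of the idea, not the .NET implementation:

```python
# Sketch: the back-end thread never touches the UI directly; it posts
# messages to a queue that the UI thread drains, mirroring the
# delegate + thread-safe Invoke pattern described above.
import queue
import threading

ui_queue = queue.Queue()

def backend_worker(messages):
    """Back-end thread: post each message instead of updating the UI."""
    for msg in messages:
        ui_queue.put(msg)

def drain_ui_queue():
    """Run on the UI thread: apply queued messages in arrival order."""
    handled = []
    while not ui_queue.empty():
        handled.append(ui_queue.get())
    return handled
```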

Published in eMotiv

Contact Info

Cortex Labs
11, 75-79 Chetwynd Street
North Melbourne 3051
Tel: 1300 885 772
Email:  info@cortexlabs.com.au

About Us

At Cortex Labs we offer a variety of products focused on enhancing the user experience by applying advances in neurotechnology from eMotiv and NeuroSky. We have focused our efforts on creating software in the categories of accessibility, gaming, and education.


©2011 CortexLabs - All Rights Reserved