ConnectoMotion
Connectomic Graph Navigation and Exploration
Pedro José de Azambuja Fonseca Silva
Thesis to obtain the Master of Science Degree in
Information Systems and Computer Engineering
Supervisors: Prof. Sandra Pereira Gama, Prof. Hugo Alexandre Teixeira Duarte Ferreira
Examination Committee
Chairperson: Prof. Miguel Nuno Dias Alves Pupo Correia
Supervisor: Prof. Sandra Pereira Gama
Members of the Committee: Prof. João António Madeiras Pereira
November 2017
Acknowledgments
I would like first to thank my thesis advisor, Professor Sandra Gama of Instituto Superior Técnico, for all the help and support she gave me during the development of this study. A big thank you to the very best advisor, teacher, and friend she was and is to me; without her I would not have been able to complete it. I would also like to thank Professor Hugo Ferreira for providing the dataset.
I also thank everyone who participated in my tests and spent some of their time helping me. You are the basis of this study.
Finally, I must express my very profound gratitude to my parents, my parents-in-law, my brother, my sister-in-law, my grandparents, my aunt, my cousins, and my girlfriend, Inês Santos, for providing me with unfailing support and continuous encouragement throughout my studies and through the process of researching and writing this thesis. This accomplishment would not have been possible without them. Thank you.
Abstract
This document covers several aspects of the representation, interpretation, and navigation of node graphs in 2D and 3D environments. The main goal of this work is to study and explore these aspects in order to create a system that enables users to explore and gather information about the human brain connectome in a tridimensional environment through the use of hand movements and gestures in midair. To accomplish this, several systems and methods are analyzed in order to determine the best approach to use in the development of the system. The solution resorted to the Leap Motion device to capture the hand gestures of the user. With the elaboration of a set of operations with associated gestures, it was possible to navigate and interact with the graph in order to retrieve relevant information. The set of gestures was derived from other studies and applications and, in order to choose the most appropriate gestures for the interaction, a survey was conducted with users. To validate the interaction with this interface, user tests were performed. During the tests, the device proved easy and intuitive to interact with for the majority of users: some users were able to perform tasks quickly and with few errors despite having no prior training with the Leap Motion, thus achieving the desired goals of the study.
Keywords
Human Brain Connectome; Midair gestures; Hand gestures; Leap Motion; Graph visualization; Graph
navigation.
Resumo
Este documento aborda vários tópicos relativos à representação, interpretação e navegação de grafos em ambiente 2D e 3D. O objetivo principal deste trabalho é estudar e explorar esses tópicos, a fim de criar um sistema que permita aos utilizadores explorar e reunir informações sobre o conectoma do cérebro humano, num ambiente tridimensional, através do uso de movimentos e gestos das mãos no ar. Para a realização deste processo de exploração e navegação, alguns sistemas e métodos são analisados de forma a discutir a melhor abordagem a utilizar no desenvolvimento do sistema. A solução recorreu ao dispositivo Leap Motion para captação dos gestos das mãos do utilizador. Com a elaboração de um conjunto de operações com gestos associados, foi possível navegar e interagir com o grafo de maneira a conseguir visualizar informação relevante. Os gestos estabelecidos foram derivados de outros estudos e aplicações e, para escolher o mais apropriado para a interação, foi realizado um questionário com os utilizadores. Para validar a interação com esta interface, foram realizados testes com utilizadores. Durante os testes, o dispositivo provou ser fácil e intuitivo para interagir com a maioria dos utilizadores; este facto foi notado quando alguns utilizadores executaram as tarefas de forma rápida e com poucos erros, uma vez que não apresentavam experiência prévia com o Leap Motion, alcançando assim os objetivos pretendidos com o estudo.
Palavras-Chave
Conectoma do cérebro humano; Gestos no ar; Gestos com as mãos; Leap Motion; Visualização de grafos; Navegação em grafos.
Contents
1 Introduction 1
1.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Document Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Related Work 5
2.1 Representing Network Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Navigating Network Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3 ConnectoMotion: Connectomic Graph Navigation and Exploration 17
3.1 Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Capturing Hand Gestures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Solution Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4 Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.4.1 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.4.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.5 Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.5.1 Connectome Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.5.2 Connectome Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4 Evaluation 33
4.1 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2.1 User Characterization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5 Conclusions and Future Work 43
5.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
A Survey documents 49
A.1 Survey protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A.2 Consent form for survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
B User test documents 52
B.1 Protocol user tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
B.2 Consent form for user tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
C User test questionnaires 56
C.1 Informative text in English . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
C.2 User characterization in Portuguese . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
C.3 User characterization in English . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
C.4 Informative text of the second questionnaire of user tests . . . . . . . . . . . . . . . . . . 57
D Dataset File 59
List of Figures
2.1 Force-directed layout approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 FORG3D [1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 H3 [2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Connectome Viewer Toolkit1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5 GEOMI2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.6 Wilmascope3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.7 GEOMI gesture detection [3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.8 XIM [4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.9 NaturalMotion [5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1 Visual representation of the connection matrix. . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Hand Hierarchy4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3 Leap Motion Interaction Area5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4 Architecture scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.5 Layout questionnaire of informative text. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.6 Layout questionnaire of characterization form. . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.7 Layout questionnaire of question about rotate method. . . . . . . . . . . . . . . . . . . . . 25
3.8 Layout questionnaire of question about zoom method. . . . . . . . . . . . . . . . . . . . . 26
3.9 Layout questionnaire of question about pan method. . . . . . . . . . . . . . . . . . . . . . 26
3.10 Layout of the connectome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.11 Region name on the connectome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.12 Interface gesture name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.13 Pinch visual aid. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.1 User performing tests with the interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2 Layout questionnaire of informative text. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.3 Layout questionnaire of characterization form. . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.4 Usability questionnaire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.5 BoxPlot Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
List of Tables
2.1 Command Mapping [3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1 Users demographic information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Scores of users classification for gestures of navigation methods. . . . . . . . . . . . . . . 28
4.1 Users demographic information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2 Statistical results of recorded times (in seconds). . . . . . . . . . . . . . . . . . . . . . . . 40
4.3 Statistical results of user errors made in each task. . . . . . . . . . . . . . . . . . . . . . . 41
4.4 Statistical results of user usability classification. . . . . . . . . . . . . . . . . . . . . . . . 41
Acronyms
AAL Automated Anatomical Labeling
IST Instituto Superior Técnico
IBEB Instituto de Biofísica e Engenharia Biomédica
1 Introduction
Contents
1.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Document Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Nowadays, people have to work with large datasets in many different fields of work and study. Datasets are very large collections of related, discrete items of data that can be accessed individually, in combination, or managed as a whole entity. To be able to work with and understand these vast collections of data, we must be able to represent and explore them [6].
An effective method for representing large amounts of data is through the use of networks, also known as graph structures [6]. This kind of structure can easily represent relations between objects, by depicting objects as nodes and associations as edges. However, the complexity of these networks can be huge and can cause significant visual clutter. One way to reduce this effect is to use a three-dimensional environment.
The representation of a graph in 3D has, under certain conditions, been shown to improve the visualization of certain information [7]. This improvement is noticeable in cases where shape and spatial position information is associated with each node and is relevant to understanding what we are trying to visualize; in such cases, an effective representation is crucial.
This approach answers the question of how to represent these datasets, but the question of how to explore them remains. A possible approach would be exploration using a mouse, but recent studies have shown that natural hand gestures are a better way to manipulate and explore tridimensional environments than the mouse [8], since they are a more direct physical analog of everyday experience.
This study is a collaboration between Instituto Superior Técnico (IST) and Instituto de Biofísica e Engenharia Biomédica (IBEB) that tries to solve the problem of visualizing and navigating a large dataset, the human brain connectome, through the use of hand gestures, since researchers in the field of neuroscience feel the need for new ways to explore this data.
1.1 Goals
The goal of this work is to enable a meaningful representation and navigation of the human brain
connectome by using hand gestures, allowing the user to explore it and find relevant information.
In order to do so, it is important to study ways to both:
• Represent the brain connectome;
• Allow the navigation through the detection of hand gestures and movements.
Representation of the Connectome
The visualization of the connectome must be able to represent the shape, the different regions of the brain, and also each individual part that constitutes it (neurons and synapses). All these elements contain attributes and relations that must be presented to the user, and all this information is vital to understanding the human brain. One aspect to keep in mind is that the regions are composed of several neurons, which are connected to each other by synapses. Another aspect is the relations that exist between regions: these relations are the connections between two neurons, one belonging to each of the two regions. To represent all of this, we can model the connectome as a network, where the neurons are nodes and the synapses are edges. Using a 3D environment, we can preserve the shape of the brain by placing neurons in the exact place where they are found in the brain.
Navigation through the use of hand gestures
After having a layout of the connectome, it is important to have a way to navigate through the data in order to retrieve relevant information. Since a 3D space will be used to represent the connectome, it is possible to use hand gestures to manipulate the visualisation. The movements and gestures we can make with our hands are naturally associated with manipulation in a volumetric environment [9]. By capturing these gestures and movements, we can superimpose a view of the user's hand on the environment, enabling direct interaction with the nodes, but we need to define a set of operations to manipulate the data and associate a gesture with each one. These operations must be easy for the user to learn and execute in order to allow smooth and quick exploration of the information.
1.2 Document Structure
This document is organized as follows. Chapter 2 covers several related works, divided into two topics: the first approaches methods to represent graphs in a tridimensional environment, and the second approaches methods to navigate and manipulate graphs. Chapter 3 presents the solution for the main goal of this study, describing the data, the device, and the interface used in the system. Chapter 4 describes the methods used to evaluate the performance of the developed system, as well as the statistical analysis of the user tests. Finally, Chapter 5 presents an overview of the conclusions and future work.
2 Related Work
Contents
2.1 Representing Network Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Navigating Network Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
The large amount of information of which the brain connectome consists follows the characteristics of graph structures, which represent relationships between objects through a node-link representation. The amount of associated clutter [6] leads to the need to create a set of mechanisms that allow effective data interpretation, consisting not only of ways to represent the aforementioned data, but also of ways to navigate it; to this end, several research works have been carried out.
2.1 Representing Network Data
This section discusses different ways of representing the graph structure of the connectome. The layout must be carefully chosen in order to maintain a correct and realistic interpretation of the information; the representation must also be appealing and easy to understand. In this section, a short overview of different layouts used in 3D graph visualization is presented.
One of the most widely used layouts for node-link networks is Force-directed Placement. In this approach, nodes are positioned according to a simulation of physical forces in which each node is repelled by the other nodes and the edges act like springs that pull together the nodes they connect. Many implementations use known laws of physics, such as Hooke's law of elasticity for edges and Coulomb's law (electrostatic interaction) for nodes [1,10]. Figure 2.1 depicts the nodes with a positive charge applied in order to repel the other nodes.
Figure 2.1: Force-directed layout approach
One feature of this layout is that, at the beginning, all nodes are positioned randomly, and then the positions are iteratively recalculated according to the forces being applied on each node at a given moment, until balance is achieved. Another feature is that this layout minimizes node overlapping and edge crossing, visual artifacts that hinder the understanding of the layout of the graph. It is a very easy approach to implement in a graph visualization system. One system that uses this method is FORG3D [1].
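The iterative simulation described here can be sketched in a few lines. The constants, the Euler integration step, and the data layout below are illustrative choices for a minimal sketch, not FORG3D's actual implementation:

```python
def force_directed_step(pos, edges, repulsion=1.0, spring_k=0.1, rest_len=1.0, dt=0.01):
    """One iteration of a force-directed layout in 3D.

    pos: dict node -> [x, y, z]; edges: list of (u, v) pairs.
    Coulomb-style repulsion acts between all node pairs and
    Hooke-style springs act along edges.
    """
    force = {n: [0.0, 0.0, 0.0] for n in pos}
    nodes = list(pos)
    # Repulsion: every pair of nodes pushes apart (inverse-square law).
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            d = [pu - pv for pu, pv in zip(pos[u], pos[v])]
            dist = max(sum(c * c for c in d) ** 0.5, 1e-6)
            f = repulsion / dist ** 2
            for k in range(3):
                force[u][k] += f * d[k] / dist
                force[v][k] -= f * d[k] / dist
    # Springs: each edge pulls its endpoints toward the rest length.
    for u, v in edges:
        d = [pu - pv for pu, pv in zip(pos[u], pos[v])]
        dist = max(sum(c * c for c in d) ** 0.5, 1e-6)
        f = spring_k * (dist - rest_len)
        for k in range(3):
            force[u][k] -= f * d[k] / dist
            force[v][k] += f * d[k] / dist
    # Simple Euler step on the positions; repeat until movement is negligible.
    for n in nodes:
        for k in range(3):
            pos[n][k] += dt * force[n][k]
    return pos
```

Running this step in a loop until node displacements fall below a threshold reproduces the "iterate until balance" behavior described above.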
FORG3D is an open-source software program for visualizing network data using three-dimensional force-directed graphs. It was developed to aid the understanding of the biological phenomena behind certain biological data. The system interface allows the user to change visual properties, including name, size, color, visibility, node shape, and edge direction, as well as physical properties, including mass and charge for the nodes and the spring constant for the edges. Another feature of the system is the possibility of dragging nodes to change their position, after which all positions are recalculated until a state is reached where no node needs to change its position.
When the simulation starts, the nodes and edges start moving according to their physical properties. Edges are assigned a spring constant, and nodes a mass and an electric constant. Spring simulation is based on Hooke's law of elasticity, where the spring constant determines the strength of the connection, and nodes are simulated as electrically charged particles that repel each other. The repelling force is determined by the assigned mass and electric constant, and the force calculation is based on Coulomb's law.
One disadvantage of this method is related to the size of the graph: when the graph grows to a certain size, the distances between the nodes no longer accurately represent the connection strengths between them. Another disadvantage is that, with large networks, the simulation may never reach stability.
Figure 2.2: FORG3D [1]
Another layout used in node-link network representation is the Hyperbolic Space approach [11]. This layout provides a way to display a graph with local detail and global context. The graph is positioned inside a sphere where the distance from the center to the boundary increases exponentially. With this growth of distances, it is possible to represent large graphs inside a small volume.
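As a concrete illustration of this property, the Poincaré ball model maps a point at hyperbolic distance d from the center to Euclidean radius tanh(d/2), so arbitrarily large distances still fit inside the unit sphere. This is a standard construction for hyperbolic layouts, shown here as a sketch rather than the exact projection any particular system uses:

```python
import math

def poincare_radius(hyperbolic_distance):
    """Map a hyperbolic distance from the origin to a Euclidean radius
    inside the unit ball (Poincare ball model): r = tanh(d / 2).

    Illustrates why distant nodes still fit in a finite volume:
    r approaches 1 but never reaches it.
    """
    return math.tanh(hyperbolic_distance / 2.0)
```

Nodes far from the focus are therefore compressed toward the boundary, which is exactly what produces the local-detail, global-context effect.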
One feature of this approach is that, when a node is selected, it is positioned in the middle of the space together with the nodes most closely connected to it, and information about that particular node is displayed; it also remains possible to understand where the node lies in the global context of the graph, because the whole graph is always visible. H3 [2], created by Tamara Munzner, implements this approach.
H3 is a system that is capable of representing large directed graphs in a 3D Hyperbolic space. The
purpose of this system is to enable the representation of structures much larger than the ones that the
traditional techniques were able to represent.
This system takes advantage of the fact that the volume of 3D hyperbolic space increases exponentially with radius, whereas the volume of 3D Euclidean space increases only polynomially. In order to minimize the problem of visual clutter, which can interfere with the gathering of information, the authors used a Focus-plus-Context view, which enables the user to see the object of primary interest in full detail while at the same time getting an overview of all the surrounding information.
One of the techniques used to manipulate large structures was to impose a hierarchy on the graph by using domain-specific knowledge to find a spanning tree. Another technique was to use a cone tree layout algorithm that places the children of a node on a hemisphere around the base of the cone. To reduce the complexity of the representation, the authors also collapse and expand subtrees of the graph in order to select the information the user intends to visualize. By using these techniques, they were able to successfully represent structures with more than 20,000 nodes with minimal visual clutter, enabling the user to find relevant information about the data.
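The spanning-tree step can be illustrated with a plain breadth-first search; H3 itself chooses the tree using domain-specific link semantics, so BFS here is only a stand-in for that step:

```python
from collections import deque

def spanning_tree(adjacency, root):
    """Extract a BFS spanning tree from a graph given as an adjacency
    dict {node: set_of_neighbours}. Returns {child: parent}, with the
    root mapped to None. The resulting tree is what a cone tree layout
    would then position in space.
    """
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent
```

Once the hierarchy exists, collapsing a subtree amounts to hiding every node whose ancestor chain passes through the collapsed node.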
The H3 layout was incorporated in H3Viewer library [12] created by the same author.
Figure 2.3: H3 [2]
Spatial layouts are also used to represent node graphs. This kind of layout has a particular characteristic when applied to graphs: each node of the graph has an associated fixed position. This method makes it possible to represent cases where position is an important factor in understanding the graph. The Connectome Viewer Toolkit [13] is a system that uses this layout to represent connectomes in which each node has a fixed position.
The Connectome Viewer Toolkit was created in response to the need in neuroscience for software tools to visualize and analyze the human brain connectome. This system uses the spatial technique to represent the nodes of the connectome at their corresponding positions in the human brain. It is important to represent each element of the connectome at its position because the connectome was obtained from neuroimages, and to understand each part of the brain it is important to visualize to which region an element belongs. Another reason why position is important is related to the history of neuroscience: the first discoveries were made by direct observation of opened brains, and each brain function discovered was associated with a position in the brain.
This toolkit provides different ways to visualize and explore the connectome, such as selecting, removing, slicing, and changing the scale of the representation, in order to give the user several ways to visualize the information of each element that composes the brain.
One of the objectives of the development of this system was to provide the scientific community with a reusable application that can be extended in order to enable new discoveries in the fields of connectomic representation and exploration and of neuroscience.
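The essence of such a spatial layout can be sketched as follows. The data shapes here (a coordinate table and a thresholded connection matrix) are assumptions for illustration, not the toolkit's actual API:

```python
def spatial_layout(coordinates, connection_matrix, threshold=0.0):
    """Build a node-link structure in which every node keeps a fixed
    anatomical position, as in spatial connectome layouts.

    coordinates: dict region -> (x, y, z) in some brain atlas space
    connection_matrix: dict (region_a, region_b) -> connection strength
    Only connections above `threshold` become edges; unlike a
    force-directed layout, positions are never recomputed.
    """
    nodes = {region: {"position": xyz} for region, xyz in coordinates.items()}
    edges = [(a, b, w) for (a, b), w in connection_matrix.items() if w > threshold]
    return nodes, edges
```

The threshold is a common way to cut weak connections and reduce edge clutter without moving any node away from its anatomical location.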
Figure 2.4: Connectome Viewer Toolkit1
Another layout used in node-link network representation is the 2.5D layout. This layout provides a way to display a graph that contains a hierarchical structure, representing it as a series of layers in 3D space. The nodes on the layers are usually ordered to reduce edge crossings between layers. One advantage of this method is a clear hierarchical view. A system that uses this approach is GEOMI [3].
The GEOMI system is a visual software tool for the visualisation and analysis of large and complex networks such as web graphs, social networks, biological networks, sensor networks, and transportation networks. It is intended for visually exploring networks and discovering patterns and trends that can provide critical insights. The system was developed by VALACON (Visualisation and Analysis of Large and Complex Networks) project team members in the National ICT Australia (NICTA) IMAGEN program.
This system has several functionalities, such as graph generation, network analysis, graph algorithms, graph layout, and interaction. The system provides several tools for each functionality; one of the tools available for graph layout is the 2.5D layout technique.
The GEOMI system presents several means of manipulation, which will be discussed in Section 2.2.
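The layer-assignment idea behind a 2.5D layout can be sketched as follows. A real implementation such as GEOMI's would additionally reorder nodes within each layer to reduce edge crossings, which this sketch omits:

```python
def layered_positions(levels, layer_gap=2.0):
    """Assign 3D positions for a 2.5D layout: nodes of the same
    hierarchy level share a z-plane, spaced along x within the layer.

    levels: dict node -> integer hierarchy level.
    """
    by_level = {}
    for node, lvl in sorted(levels.items()):
        by_level.setdefault(lvl, []).append(node)
    pos = {}
    for lvl, nodes in by_level.items():
        # Each layer is a plane at z = level * layer_gap.
        for i, node in enumerate(nodes):
            pos[node] = (float(i), 0.0, lvl * layer_gap)
    return pos
```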
Figure 2.5: GEOMI2
1http://www.connectomics.org/viewer/screencasts/2http://sydney.edu.au/engineering/it/~shhong/comparison-radial.jpg
2.2 Navigating Network Data
This section focuses on methods that enable the navigation on a 3D environment. It is important to
notice the complexity and the degree of interaction that each method presents, it must be natural to
interact and provide a reasonable set of available operations.
One of the most widely used methods to interact with and explore graphs is manipulation with mouse and keyboard, since these two devices are common and most people are already familiar with them. WilmaScope [14] is a software package that uses these devices to interact with a selected graph.
WilmaScope is a Java3D open-source software package which creates 3D animated visualizations of graph structures in real time. This application uses a force-directed layout (detailed in Section 2.1) to represent the graph and has a user-friendly interface for graph modification.
The interface of the system provides the user with an easy way to manipulate and modify the graph, such as deleting and adding nodes, changing attributes, and creating clusters. With a combination of mouse movements and button presses, it is possible to navigate through the graph, for example by rotating, zooming, and translating. One disadvantage of this system is the lack of automatic camera positioning on selected nodes or edges.
Figure 2.6: Wilmascope3
This tool proved to be useful in tasks of graph modification and visualization of small and medium graphs, but in exploration tasks it did not show promising results.
Another method to explore and manipulate graphs is through the use of head gestures. This manipulation is made possible by using two cameras to capture the head movements of the user.
3http://wilma.sourceforge.net/doc/ExtendingWilmaScope_files/simpleXWGvis.png
Table 2.1: Command Mapping [3]

Head movement                            Navigation
Horizontal sideways                      Mirrored X translation
Vertical up/down                         Mirrored Y translation
Horizontal back/forth (towards screen)   Zoom out/in
Nodding up/down                          X rotation
Tilting left/right                       Y rotation
This approach intends to take advantage of the natural way in which humans perceive the world. This type of manipulation is used in the GEOMI system [3].
The representation used by this system was described in Section 2.1. The GEOMI system uses two types of interaction: classical mouse interaction and 3D head gestures. The latter is the focus of this section. In order to simplify the system, the authors specified five head movements that allow users to interact with it. The five head movements are described in Table 2.1, which correlates them with the corresponding navigation actions.
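The command mapping of Table 2.1 amounts to a simple dispatch from detected head movements to navigation actions. The gesture labels below are illustrative names, not GEOMI's actual identifiers:

```python
def head_gesture_to_action(gesture):
    """Dispatch a detected head movement to a navigation action,
    following the command mapping reported for GEOMI (Table 2.1).
    Unrecognized gestures map to no action.
    """
    mapping = {
        "horizontal_sideways": "mirrored_x_translation",
        "vertical_up_down": "mirrored_y_translation",
        "back_forth": "zoom",
        "nod_up_down": "x_rotation",
        "tilt_left_right": "y_rotation",
    }
    return mapping.get(gesture, "none")
```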
Figure 2.7 describes in more detail how gestures are detected (b) and gives an example of a possible gesture (a).
Figure 2.7: GEOMI gesture detection [3]
Another method used to explore and manipulate graphs in tridimensional spaces is the use of immersive environments. These systems usually consist of a room around which several sensors are placed to detect user movements, together with a 360° projection of the visualization. One disadvantage of this approach is that, given the large dimensions of the room, the user usually needs an additional device to tell the system the exact positions and gestures of their hands. A system that explores this method is XIM [4].
XIM (eXperience Induction Machine) is a system that helps users navigate and understand large datasets in a virtual reality environment through the use of natural gestures. It is based on previous work by the same authors [15].
The system was built in a twenty-five-square-meter (25 m²) room with four projection screens, a luminous interactive floor with a tracking system, sensors around the room that capture user movements, as well as a sound system and microphones. Additionally, the system has sensing gloves to obtain forearm orientation, finger position, and electrodermal response. Figure 2.8 shows the described system.
Figure 2.8: XIM [4]
During development, the system was tested with participants using a large dataset of the human brain connectome as an example. Twenty participants were recruited for the tests. All participants received a course to learn basic skills for exploring and manipulating neuronal networks. After the course, participants were separated into two groups in order to perform the same tasks using different tools: one group used the XIM system and the other used the Connectome Viewer Toolkit [13]. After task completion, each participant answered a questionnaire and performed a drawing exercise to evaluate gained knowledge and visual memory.
The development team of XIM concluded that, with their system, users were able to extract more information about the network, and their visual memory of the structure was also better: they remembered each zone of the human brain connectome better. These results show that natural gestures improve the interaction with large datasets. The authors state that this system can be improved and used in several areas of medical investigation. Later, a system based on this one, BrainX3 [16], was developed. This new system is capable of determining the user's cognitive workload and arousal and consequently adapting the visualization and interaction to avoid information overload and boost the exploration process.
A more recent method for manipulation in tridimensional spaces is the use of a small device, the Leap Motion4, capable of capturing the movements and gestures of users. This device makes it possible
4https://www.leapmotion.com
to use hand gestures to interact with graphs (or other tridimensional structures) without the need for a more complex system to monitor the user interaction, allowing a simpler setup, such as a single laptop connected to the Leap Motion. A system that uses this device to manipulate information in a 3D space is NaturalMotion [5].
NaturalMotion allows the user to visualize time-evolving graph data in a three-dimensional environ-
ment using natural hand gestures.
Data is represented as a Matrix Cube, where one of the axes represents time and the other two represent an adjacency matrix of the connections of the graph. This system uses Leap Motion to capture
hand gestures, like rolling, grabbing, poking and pinching. The use of natural gestures to manipulate
objects in a three-dimensional environment is intended to take advantage of how we humans interact
with objects in the real world.
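Conceptually, the Matrix Cube can be thought of as a time-indexed stack of adjacency matrices. The following minimal sketch is purely illustrative; the class and method names are invented here and do not come from NaturalMotion:

```python
# Minimal sketch of the Matrix Cube idea: a stack of adjacency matrices
# indexed by time. Names are illustrative, not from the original system.

class MatrixCube:
    def __init__(self, n_nodes, n_steps):
        # cube[t][i][j] == 1 means nodes i and j are connected at time t
        self.cube = [[[0] * n_nodes for _ in range(n_nodes)]
                     for _ in range(n_steps)]

    def set_edge(self, t, i, j, value=1):
        self.cube[t][i][j] = value
        self.cube[t][j][i] = value  # undirected graph

    def slice_at(self, t):
        """'Slicing' the cube: the adjacency matrix at one time step."""
        return self.cube[t]

cube = MatrixCube(n_nodes=3, n_steps=2)
cube.set_edge(0, 0, 1)
cube.set_edge(1, 1, 2)
```

The slicing gesture described below then corresponds to picking one `slice_at(t)` plane out of the cube.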
To interact with the Matrix Cube the system has three available operations: rotating, slicing and
scaling. Rotating is done by grabbing and rotating a small cube that is placed at the bottom right corner.
This cube is a minimalistic representation of the orientation of the Matrix Cube, which gets the same
orientation of the small cube. Slicing is done by dipping the tip of the right index finger in the Matrix
Cube and it will show in detail the adjacency matrix in the selected time, as described in figure 2.9.
To undo these operations and return to the general view the user just needs to repeat the dipping of
the right index finger tip in the selected slice. Scaling is done by having the left index finger and thumb
simultaneously in contact with the Matrix Cube, contracting the fingers causes the scale to decrease and
expanding the fingers causes the scale to increase. Later, they extended the system [17] and added one more gesture, Swiping, which is performed by swiping the right index finger in order to change the selected time.
Figure 2.9: NaturalMotion [5]
The team behind NaturalMotion concluded that a visualization of time-evolving graphs, interacted with through natural hand gestures in a three-dimensional environment, is more user-friendly.
2.3 Discussion
This section discusses the studies presented above, which show different approaches to solving the problems of representing and manipulating graphs in tridimensional environments. These studies present both advantages and disadvantages.
The systems presented in this document that focus on the representation of graphs are: FORG3D [1], H3 [2], Connectome Viewer Toolkit [13] and GEOMI [3]. All of them have the advantage of representing large graphs; in addition, H3 has the advantage of a highlighted node, with the visualization changing as a function of this node, the Connectome Viewer Toolkit has the advantage of representing the nodes at fixed positions, and GEOMI has the advantage of representing graphs with hierarchical structures.
In terms of navigation, these systems are: WilmaScope [14], GEOMI [3], XIM [4] and NaturalMotion [5]. WilmaScope has the advantage of using a widely adopted device, the mouse, and also has a set of operations that are easy to learn and use. XIM and NaturalMotion have the advantage of using hand gestures to interact with the graph; in addition, NaturalMotion has an easy setup.
The system developed in this work tries to combine the advantages of these systems in order to create a more functional system. One key feature incorporated was the use of spatial positioning to represent the connectome: as mentioned before, in neuroscience it is important to represent each element of the brain in its respective position, an approach that has proved positive in neuroscience works [4, 13]. The other systems [1–3] place the nodes at arbitrary positions, which may be a disadvantage for neuroscience studies.
Another feature incorporated in this work was the use of the Leap Motion device to capture the user's gestures and movements. This kind of manipulation has been shown to be a useful way for the user to interact with tridimensional graphs, with a simple setup, as demonstrated by Clarke et al. [5]; another system showing that gesture manipulation has a positive impact is the XIM system by Betella et al. [4]. Although XIM has the advantage of using hand gestures, it has two disadvantages: it needs a large room to operate and it needs a glove to sense the user's gestures. The GEOMI system presents a disadvantage in the lack of sensitivity of its head-movement navigation, which makes the interaction between the user and the system more complicated and awkward.
3 ConnectoMotion: Connectomic Graph
Navigation and Exploration
Contents
3.1 Dataset
3.2 Capturing Hand Gestures
3.3 Solution Architecture
3.4 Survey
3.5 Interface
The solution consists of the development of a system capable of capturing hand gestures to navigate a meaningful representation of the human brain connectome. Through the analysis of gestures, the user is enabled to execute different operations in order to explore the data in a way that is similar to how people manipulate objects in the real world.
3.1 Dataset
The object of study that this work focuses on is the Human Brain Connectome. The Connectome was created to offer a structural description of the Human Brain, which is composed of approximately 10^10 neurons and 10^13 connections in the cortex alone. Several proposals were presented for this representation [18], and three approaches to the visualization of this information emerged: Microscale, Macroscale and Mesoscale [18].
Microscale: at this scale, the human brain connectome is represented at the level of single neurons and synapses. This level of detail leads to visualizations with a large degree of complexity, which can be unnecessary or irrelevant.
Macroscale: this scale is concerned with the representation of brain regions and pathways. One difference compared with the microscale is that neurons are well-defined elements, while there is no universal consensus on the exact delimitation of the regions, which correspond to clusters of neurons. Representing only the regions of the brain removes a lot of complexity from the visualization, diminishing the cluttering of the graph. At this level of representation, all the connections between two regions are represented as a single link (edge).
Mesoscale: this scale is somewhere between the microscale and the macroscale. Instead of representing large sections of neurons, as the macroscale does, it represents anatomically or functionally distinct populations of neurons. The brain regions at this scale are sets of small populations of neurons. This scale is less complex than the microscale, but more informative than the macroscale.
The dataset used in this work was the Automated Anatomical Labeling (AAL) [19], which is a macroscale representation. The AAL is a digital human brain atlas with a labeled volume. This dataset was provided by the neuroscience department of IBEB, thanks to the partnership between IBEB and IST to carry out this study. In the AAL, each label represents a part/region of the human brain; it is composed of 7 clusters of regions, 116 regions (nodes) and 534 connections (edges). This dataset is the one that best suits this work since it allows a good visualization of the brain and the different regions that form it without entering a level of detail that would cause clutter in the visualization, making the perception of the connectome difficult.
The dataset is stored in a text file that represents the connection matrix of the connectome. This matrix stores values of 0 and 1, which represent whether two regions are connected to each other.
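As an illustration, parsing such a file into an adjacency matrix and an edge list might look like the following sketch (it assumes whitespace-separated 0/1 values, one matrix row per line; the actual file layout may differ):

```python
# Illustrative sketch of parsing a connection-matrix text file into an
# edge list. Assumes whitespace-separated 0/1 values, one row per line.

def parse_connectome(text):
    matrix = [[int(v) for v in line.split()]
              for line in text.strip().splitlines()]
    # The matrix is symmetric, so only read the upper triangle
    # to obtain each undirected edge once.
    edges = [(i, j)
             for i, row in enumerate(matrix)
             for j in range(i + 1, len(row))
             if row[j] == 1]
    return matrix, edges

sample = """0 1 0
1 0 1
0 1 0"""
matrix, edges = parse_connectome(sample)
```

Applied to the AAL file, the same routine would be expected to yield a 116x116 matrix and its 534 undirected edges.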
Figure 3.1 shows the connection matrix, and Appendix D contains a sample of the dataset file.
Figure 3.1: Visual representation of the connection matrix.
3.2 Capturing Hand Gestures
To detect and capture hand gestures and movements we used the Leap Motion device1. It is a system
that combines software and hardware to track the movement of hands and fingers with very low latency,
converting it into 3D input. The Leap Motion Controller is composed mainly of two cameras and three infrared LEDs. This hardware captures grayscale stereo images. To obtain depth information (a depth map), advanced algorithms analyze the stereo images and generate corresponding depth values. Other algorithms identify and build a model of the hand structure; this model is a simplified “skeleton” of the hand, as shown in Figure 3.2, with all the information about it, such as whether it is the right or left hand, as well as the layout of each finger.
With its wide-angle lenses, the device has a large interaction space of 0.23 m^3, which takes the shape
of an inverted pyramid – the intersection of the binocular cameras’ fields of view. Previously, the Leap
Motion Controller’s viewing range was limited to roughly 60 cm above the device. With the Orion beta
software, this has been expanded to 80 cm. This range is limited by LED light propagation through space, since it becomes much harder to infer the hand's position in 3D beyond a certain distance. LED light intensity is ultimately limited by the maximum current that can be drawn over the USB connection.
1 https://www.leapmotion.com
Figure 3.2: Hand Hierarchy2
Figure 3.3 represents the interaction area.
Figure 3.3: Leap Motion Interaction Area3
A study using Leap Motion was conducted in order to determine how accurately the system recognizes gestures [20], and it concluded that this device is suitable for use in gesture-interaction systems.
2 http://blog.leapmotion.com/wp-content/uploads/2014/08/hand-hierarchy.png
3 http://blog.leapmotion.com/wp-content/uploads/2014/07/leap-motion-interaction-area.png
3.3 Solution Architecture
Figure 3.4 presents the architecture of the solution that was developed in this project.
Unity was the platform used to analyze the user gestures provided by the Leap Motion device in order to interact with the connectome. The dataset of the human brain connectome was stored in a text file, which was supplied to Unity to be parsed into the matrix that represents the graph of the connectome. In this platform, with the Leap Motion asset, the user's gestures were interpreted in order to identify them and drive the manipulation of the connectome.
Figure 3.4: Architecture scheme
3.4 Survey
In order to decide how interaction/navigation with the interface would be performed, it was necessary to carry out a questionnaire with users. Thus, a survey of possible gestures for the user's interaction with the interface was prepared. Six gestures were selected, two for each interaction method, with the aim of selecting only one for implementation. An explanatory protocol was created for this survey, which can be found in Appendix A.
3.4.1 Procedure
The participants were recruited through standard procedures, which included ads on social networks and direct contact; participants were also asked to refer other people who might be interested in participating in the test.
The users always performed the survey under supervision, so that doubts about how the proposed gestures for each method are performed could be clarified.
In this survey, several users were contacted, and the questionnaire was answered in a comfortable space chosen by the user. The questionnaires were conducted on a MacBook Pro 15" (Mid 2014, Intel Core i7 2.2GHz, 16GB 1600MHz DDR3, Intel Iris Pro 1536MB).
Before performing the tests, the user was shown the Leap Motion device operating in the Leap Motion Visualizer application, and the user was invited to interact with the device in order to understand how it worked and the type of sensitivity it has.
At the beginning of the survey, the purpose of this study was explained to the users. Each participant was asked to sign a consent form in order to allow the collection of their personal information. The consent form is presented in Appendix A.
The test consisted of a questionnaire presented as a web-browser form running on a local server, where users were asked to select, from two gestures, the one that best suited each interaction method of the interface. The form was developed in HTML, CSS and JavaScript.
At the beginning of the test, the user was presented with a small informative text describing the purpose of the study and the questionnaire to which they would respond (Figure 3.5). The text was written in Portuguese since the target audience for these surveys was Portuguese citizens. The English version of the informative text is presented in Appendix C.1.
After reading this informative text, the user would find a small characterization form (which does not compromise the user's anonymity), used later, along with the results obtained from the tests, to characterize the users who participated in the survey (Figure 3.6). Appendix C describes the fields of the user characterization: Appendix C.2 contains the Portuguese version of the fields and Appendix C.3 the English version.
Figure 3.5: Layout questionnaire of informative text.
Figure 3.6: Layout questionnaire of characterization form.
After completing the form, for each method a question was asked, presenting the two respective gestures for the users to select the one that best suited the method. Each question related to one method of interaction with the interface of this study: rotate, zoom and pan. The translation of the questions asked, related to each interaction method, is given in Appendix C.
The survey started by presenting the Rotate method and asking the following question: "Which gesture do you consider most appropriate for the Rotate action?". Following the question, a rating scale with an even number of options was presented, in which the user expresses how much they prefer one gesture over the other. On this scale, an illustrative animation of each gesture given as an option was shown at each end. For rotation, users had to choose between the Pinch gesture with both hands and the Swipe gesture with one hand, as shown in Figure 3.7.
Figure 3.7: Layout questionnaire of question about rotate method.
In older versions of the Leap Motion gesture package, the Pinch and Swipe gestures were already implemented; for this reason, they had been used in various tasks and were also easy to perform. The Swipe gesture was therefore considered in this work as a possible gesture for executing the rotation method. An analogy was also made with the rotation gesture in the Kinetic system, which used the stretched arms to begin its execution and applied a rotation to the visualized object according to the direction of the arms' movement; taking into account the limitations of the Leap Motion, the gesture was adapted to use the Pinch as the start of the gesture execution, with the rotation applied to the visualized object according to the direction of the hands' movement. Making another analogy with the manipulation of interfaces on touch screens, it is possible to observe the use of two fingers contacting the screen and, depending on their movement along a circular path, the rotation method being applied; similarly, the Pinch gesture translates the contact with the screen, and the rotation is carried out according to the hands' movement.
Regarding the zoom method, the same question was asked as for the rotation method, with the difference that it related to the zoom action: "Which gesture do you consider most appropriate for performing the Zoom action?". Analogously to the previous method, a scale with an even number of options was shown, with an illustrative animation of each gesture at each end. For this method, users had to choose between the Pinch gesture with both hands and the Grab gesture with one hand, as represented in Figure 3.8.
Figure 3.8: Layout questionnaire of question about zoom method.
Pinch with both hands is analogous to the gesture used to zoom on touchscreen interfaces: the touch-screen gesture consists of two fingers in contact with the screen and, depending on whether the fingers approach or move away from each other, zooming is performed; here, performing Pinch with both hands serves as the trigger of the gesture and, depending on whether the hands approach or move away from each other, zoom is applied to the interface. Grab with one hand is an analogy to the interaction a person has with an object: when a person wants to observe an object in better detail, a natural action is to grab it and bring it closer, so this gesture consists of performing the Grab action (making a fist) and moving the hand forward and backward. When the hand moves forward, a zoom out is performed; when it moves backward, a zoom in is performed, ending when the Grab stops.
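The mapping from the distance between the two hands to a zoom amount can be sketched as a simple ratio. This is an assumed scheme for illustration only, not necessarily the exact implementation described later in this document:

```python
# Assumed zoom mapping: the ratio of the current to the initial distance
# between the two pinch points scales the visualized object.

def zoom_factor(initial_distance, current_distance):
    if initial_distance <= 0:
        return 1.0  # degenerate start: apply no zoom
    return current_distance / initial_distance

# Hands moving apart  -> factor > 1 (zoom in);
# hands approaching   -> factor < 1 (zoom out).
```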
Finally, the last method explored in this study was the pan method. With exactly the same structure as the previous methods, the following question was asked: "Which gesture do you consider most appropriate for the Pan (translation) action?". For this method, users had to choose between the Grab gesture with one hand and the Pointing gesture with one hand, as represented in Figure 3.9.
Figure 3.9: Layout questionnaire of question about pan method.
Grab with one hand is an analogy to the interaction a person has with an object: when a person wants to move an object, a natural action is to grab it and change its position, so this gesture consists of performing the Grab action (making a fist) and moving the hand up, down, left, and right. When the hand moves up and down, a vertical translation is performed; when the hand moves left and right, a horizontal translation is performed, ending when the Grab stops. Pointing with one hand is analogous to the movement of a cursor using Leap Motion, studied in the work of Sabir and his team [21]: when the finger points in one direction, a translation is performed in the same direction, as if an object were being moved in the direction of the cursor.
In the end, a message of appreciation was presented to the users for taking part in this study and spending some of their time, and the form was submitted. As a token of thanks and compensation for the survey, a few sweets were given to the users.
3.4.2 Results
Twenty-six users participated in this study. At the beginning, users were asked to answer a small characterization form; Table 3.1 shows the information about the users who participated in this test.
Table 3.1: Users' demographic information.
Gender: Male 65.38%, Female 34.62%
Age range (years): 11-20 53.84%, 21-30 11.54%, 31-40 0.00%, 41-50 23.08%, 51-60 11.54%
Nationality: Portuguese 100%
Country of residence: Portugal 100%
Education degree: High School 61.54%, Bachelor (BSc) 23.08%, Master (MSc) 15.38%
Frequency of use of gestural interfaces: Every day 100%
This survey aimed to understand which gestures users consider best suited for interaction with the interface.
In this study, there are three navigation methods in the interface: rotate, zoom and pan. For each method, the user was asked to choose between two different gestures (described in the previous section).
Table 3.2 shows the scores obtained from the users' classification of each interaction method.
Table 3.2: Scores of the users' classification for the gestures of the navigation methods.
Navigation method: Score
Rotate: -13
Zoom: -36
Pan: -5
3.4.3 Discussion
These scores were derived from the average ratings that users gave each method. When the score is negative, the gesture users consider best for the method is the one with negative weights; when the score is positive, the chosen gesture is the one with positive weights.
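The text does not state the exact weighting, but one plausible scheme for an even four-option scale maps the two options nearest one gesture to negative weights and the other two to positive weights, accumulating over all answers. The following reconstruction is hypothetical, not taken from the thesis:

```python
# Hypothetical reconstruction of the survey scoring: each answer on an
# even 4-option scale is mapped to a weight and the weights are summed.
# A negative total favors the gesture anchored at the negative end.

WEIGHTS = {1: -2, 2: -1, 3: 1, 4: 2}

def method_score(answers):
    return sum(WEIGHTS[a] for a in answers)

# Example: most respondents lean toward the negative-end gesture.
score = method_score([1, 1, 2, 3, 2, 1])
```

Under this assumed scheme, scores such as -36 for Zoom would indicate a strong collective preference for the gesture placed at the negative end of the scale.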
Analyzing the scores obtained for each method we can conclude:
• For the method Rotation users preferred the Pinch gesture with both hands, as users considered
Pinch to be able to rotate the objects more accurately while Swipe, although it seemed more
intuitive to execute, did not produce the same level of precision.
• For the Zoom method, users preferred the Pinch gesture with both hands; they think this gesture is the best to perform this action since it is simpler and easier to execute and is associated with the gesture commonly used on touch devices.
• Regarding the Pan method, users preferred the Fist gesture because they associate it with the action of grabbing and dragging an object, and because of the ease of execution of the gesture. However, the users showed some indecision in the choice of this gesture.
3.5 Interface
The objective of this study was to create an application that allows interaction through gestures captured by the Leap Motion and interpreted by the application; as such, the Unity platform was used to develop the interface. Unity is a cross-platform game engine developed by Unity Technologies, primarily used to develop video games and simulations. Unity was chosen to facilitate the creation of the connectome 3D model, since the focus of this study is not the three-dimensional modeling of the brain but the user-interface interaction. Another important factor was that an asset already exists to integrate the communication between a Unity program and the Leap Motion device, the Leap Motion Core Asset. The version of Unity used in this study was 5.6. In this way, Unity was used to create the interface, combined with the Leap Motion Core Asset, which allowed Unity to receive the data provided by the Leap Motion device.
The development of this application was divided into two key features: Connectome Representation
and Connectome Navigation.
3.5.1 Connectome Representation
The first phase consisted of the graphical representation of the human brain's connectome. The dataset provided contained the 3D representation of the various regions of the brain, so it was necessary to represent this model through a graph in which the regions of the brain are the nodes and the links that exist between the regions are represented as edges. The matrix for the graph was obtained from the parsing of the text file; this file contained 116 lines, and each line contained 116 values representing whether there was a connection between two regions. For the representation, Unity primitives were used for the nodes and edges. Each node was represented by a sphere, and the edges were represented as cylinders whose position was determined by the midpoint between the nodes they connect; a scale transformation was then applied so that the height of the cylinder equaled the distance between the nodes. Figure 3.10 shows the connectome obtained from this operation.
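The midpoint and length computation used to place each edge cylinder can be sketched with plain vector math; the Python below stands in for the corresponding Unity transform operations:

```python
# Sketch of the edge-cylinder placement: the cylinder is centered at the
# midpoint of the two node positions and scaled so its height equals the
# distance between them.
import math

def edge_transform(a, b):
    midpoint = tuple((ai + bi) / 2 for ai, bi in zip(a, b))
    length = math.dist(a, b)  # Euclidean distance between the nodes
    return midpoint, length

mid, length = edge_transform((0.0, 0.0, 0.0), (0.0, 4.0, 3.0))
```

In Unity, the cylinder would additionally be rotated so its local up axis aligns with the vector from one node to the other.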
Figure 3.10: Layout of the connectome.
Also, the Billboard technique was used to show the name of the region to which the node corresponds
as can be seen in figure 3.11. The Billboard technique consists of creating a sprite that is always facing
the screen no matter which direction it is looked at from.
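As a rough illustration of the Billboard idea, the yaw needed for a label to face the camera in the horizontal plane can be computed as follows. In Unity this is typically handled by transform.LookAt; the Python sketch below only shows the underlying math:

```python
# Billboard sketch: compute the yaw angle (degrees) that turns a label
# so its forward vector points at the camera in the horizontal plane.
import math

def billboard_yaw(label_pos, camera_pos):
    dx = camera_pos[0] - label_pos[0]
    dz = camera_pos[2] - label_pos[2]
    # atan2(x, z) matches a convention where yaw 0 faces +z.
    return math.degrees(math.atan2(dx, dz))

yaw = billboard_yaw((0.0, 0.0, 0.0), (0.0, 0.0, 5.0))
```

Recomputing this every frame keeps the region-name sprite facing the screen regardless of how the connectome is rotated.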
3.5.2 Connectome Navigation
In the second phase, after the representation of the connectome in 3D, the interpreter of the gestures captured by the Leap Motion device was implemented.
Figure 3.11: Region name on the connectome.
It was at this stage that the gestures for the Rotation, Pan, Zoom, Selection and Expansion methods were developed. This interpreter was created in C# on the Unity platform. It captures and interprets the gestures coming from the Leap Motion, analyzing the positions of the various fingers of the hand and whether each finger is stretched or not; according to the sequence of hand states, the gesture best suited to that sequence is performed.
The gestures implemented for the application based on the results obtained on the survey are the
following:
• Rotation: consists of performing a Pinch on both hands. If the hands move in depth, each hand
in opposite directions, the object is rotated on the y-axis. If the hands move in height, each in
opposite directions, the object is rotated on the z-axis;
• Zoom: consists of a Pinch (joining the tips of the index finger and thumb) with both hands. If the hands move away from each other along the x-axis, a Zoom In is produced; if the hands approach each other, a Zoom Out is produced;
• Pan: consists of closing the hand (making a fist) and depending on whether the hand moves
horizontally or vertically the object moves in the same way;
• Selection: consists of having only the right index finger stretched and touching a node in order to select or deselect it;
• Expansion: consists of having the right index finger and the middle finger stretched and touching a node in order to show its region, or touching a region in order to hide it and show it again as a node.
The gestures were characterized by having three types of states: a start state, an intermediate state,
and a termination state.
Rotation was composed of a start state, which consisted of performing Pinch with both hands; intermediate states, in which the user held the Pinch with both hands while moving the hands along the y-axis and/or the z-axis; and a termination state, reached when the user stopped performing the Pinch with one or both hands.
Zoom was composed of a start state, which consisted of performing Pinch with both hands; intermediate states, in which the user kept the Pinch with both hands while moving the hands along the x-axis; and a termination state, reached when the Pinch stopped in one or both hands.
Pan was composed of a start state, which consisted of performing a Fist with the right hand; intermediate states, in which the user maintained the Fist while moving the hand along the x-axis and the y-axis; and a termination state, reached when the user stopped performing the Fist.
Selection was composed of a start state, which consisted of having only the index finger of the right hand stretched; intermediate states, in which the user kept the index finger stretched while moving the hand through the graphical environment, a node being selected/deselected when contact with it was detected; and a termination state, reached when the user no longer had the index finger stretched.
Expansion was composed of a start state, which consisted of having the index and middle fingers of the right hand stretched; intermediate states, in which the user kept the same fingers stretched while moving the hand through the graphical environment, a node or region being expanded/collapsed as appropriate when contact was detected; and a termination state, reached when the user no longer had the fingers stretched.
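The start/intermediate/termination structure described above amounts to a small state machine per gesture. Below is a minimal sketch for the Pan gesture, under the simplifying assumption that only the fist state matters:

```python
# Sketch of the start/intermediate/termination structure for Pan:
# IDLE -> ACTIVE when the fist closes (start state), ACTIVE while the
# fist is held (intermediate states, translation applied each frame),
# back to IDLE when the fist opens (termination state).

def step(state, fist_closed):
    if state == "IDLE" and fist_closed:
        return "ACTIVE"   # start state: fist detected
    if state == "ACTIVE" and fist_closed:
        return "ACTIVE"   # intermediate: keep applying translation
    return "IDLE"         # termination: fist released

states = []
s = "IDLE"
for fist in [True, True, False]:
    s = step(s, fist)
    states.append(s)
```

The other four gestures follow the same pattern, differing only in the hand condition that starts and sustains them.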
In order to assist the user in performing the gestures, a text with the name of the gesture being executed is shown in the upper left corner of the interface, as shown in Figure 3.12, so that the user receives feedback on the state of the system.
Another visual aid is used when the user performs gestures with Pinch: a visual effect is rendered at the point of contact between the index finger and the thumb to let the user know that the Pinch is being detected, as shown in Figure 3.13.
Figure 3.12: Interface gesture name.
Figure 3.13: Pinch visual aid.
4 Evaluation
Contents
4.1 Procedure
4.2 Results
4.3 Discussion
To support the conclusions of this study, user tests were performed in order to collect data and information about the performance of the user-interface interaction.
For this study, a questionnaire was also carried out, in which users were asked to rate the interaction with the interface and how well the manipulation gestures fit that interaction.
The performance of user tests was important in the development of this study: it was through these tests that it was possible to verify whether the goals of the study were reached, since this study has as its goal the creation of an interface that allows the visualization and navigation of the human brain connectome.
4.1 Procedure
These user tests had the objective of evaluating the gestures implemented in the application. Users performed a set of tasks in order to assess how appropriate the gestures are to the application and to the accomplishment of the tasks. An explanatory protocol was created for this test, which can be found in Appendix B.
The participants were recruited through standard procedures, which included ads on social networks and direct contact; participants were also asked to refer other people who might be interested in participating in the test.
The users performed the test in a private office, where the environment was controlled in order to avoid distractions. Figure 4.1 shows a user performing the tests.
Figure 4.1: User performing tests with the interface.
To carry out the test, a desktop computer was used, with a 29" screen, Intel Core i7 4.3GHz, 16GB 2333MHz DDR4 and a GTX 1070 8GB.
At the beginning of the test, the user was presented with a small informative text describing the purpose of the study and the questionnaire to be answered (Figure 4.2). The text was written in Portuguese since the target audience for this study was Portuguese citizens. Appendix C, Section C.4, contains the English version of the informative text.
Figure 4.2: Layout questionnaire of informative text.
Before performing the tests, it was explained to the users how the interaction gestures are performed, and users were allowed to interact freely with the application in order to get familiar with the interface and to ask questions about the gestures. As with the first test, users were also asked to complete a characterization form (Figure 4.3), which does not compromise the user's anonymity and was used later, along with the results obtained from the tests, to characterize the users who participated in this second round of tests.
The gestures implemented in the application are the following:
• Rotation;
• Zoom;
• Pan;
• Selection;
• Expansion.
The tasks to be performed were as follows, in this order:
1. Task 1 - Use the selection gesture to select a node that is connected to the preselected node;
36
Figure 4.3: Layout questionnaire of characterization form.
2. Task 2 - Use the rotation gesture to highlight the preselected node;
3. Task 3 - Use the zoom gesture to place the preselected node;
4. Task 4 - Use the pan gesture to place the preselected node in the center of the screen;
5. Task 5 - Use the expand gesture to view the region of the preselected node;
6. Task 6 - Use the various gestures to view the region of the preselected node;
7. Task 7 - Use the various gestures to place the pre-selected node in the center of the screen, select
an adjacent node, and expand both regions.
During the accomplishment of the tasks, the time taken to complete each task and the number of errors committed by the user were recorded.
The errors consisted of:
• Select / Deselect a node that was not part of the task;
• Expand / Collapse a region that was not part of the task;
• Perform a gesture while another was intended.
After the tasks were performed, users were asked to answer a small questionnaire regarding the usability of the interface and its interaction. There were 11 questions about the tasks performed during the test and about the user-interface interaction experience.
The questions were also presented through an online questionnaire, similar to the user characteri-
zation form. The questions asked were as follows:
• Question 1 - How do you rate the performance of the 1st task?
• Question 2 - How do you rate the performance of the 2nd task?
• Question 3 - How do you rate the performance of the 3rd task?
• Question 4 - How do you rate the performance of the 4th task?
• Question 5 - How do you rate the performance of the 5th task?
• Question 6 - How do you rate the performance of the 6th task?
• Question 7 - How do you rate the performance of the 7th task?
• Question 8 - Was the Rotation gesture appropriate to the interaction?
• Question 9 - Was the Zoom gesture appropriate to the interaction?
• Question 10 - Was the Pan gesture appropriate to the interaction?
• Question 11 - Was it easy to interact with the system?
The first 7 questions were answered on a five-point semantic differential scale with ”Very Difficult” and ”Very Easy” as extremes. The last 4 questions used a five-point semantic differential scale with ”Strongly Disagree” and ”Strongly Agree” as extremes.
Figure 4.4 shows the usability questionnaire presented to the user.
4.2 Results
This section describes the results obtained in the user tests.
4.2.1 User Characterization
Twenty users participated in these tests. At the beginning of each test, users were asked to answer a small
characterization form; Table 4.1 shows the information about the users who participated in this test.
Figure 4.4: Usability questionnaire
Table 4.1: Users demographic information.
Gender Male 45%
Female 55%
Age range (years) 11-20 5%
21-30 80%
31-40 5%
41-50 0%
51-60 10%
Nationality Portuguese 100%
Country of residence Portugal 100%
Education Degree High School 61.54%
Bachelor (BSc) 23.08%
Master (MSc) 15.38%
Frequency of use of gestural interfaces Every day 100%
The objective of this test was to analyze how the application performed when manipulated by the users, that is, to analyze the user-interface interaction.
To this end, a list of tasks was created for users to perform, so that the completion time (in seconds) of each task could be measured, as well as the errors users made while performing it. Table 4.2 shows the statistical results of these tests, obtained by processing the data with SPSS, a statistical software package developed by IBM and widely used for statistical analysis in the social sciences.
Table 4.2: Statistical results of recorded times (in seconds).
Task Mean Median Std. Deviation Minimum Maximum
Task 1 4.57 3.60 2.08948 1.56 8.80
Task 2 9.04 7.42 5.21247 3.94 21.99
Task 3 9.13 8.38 4.76513 4.06 25.35
Task 4 8.67 7.18 3.58255 4.55 18.65
Task 5 5.69 4.76 2.80228 1.72 12.40
Task 6 18.61 19.04 7.90723 7.13 33.11
Task 7 25.14 20.99 10.26095 14.45 48.58
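As an aside, the descriptive statistics reported in Tables 4.2-4.4 (mean, median, sample standard deviation, minimum, and maximum) can be reproduced without SPSS. The sketch below uses Python's standard `statistics` module; the sample values and the `describe` helper are illustrative only, not the data actually recorded in the tests.

```python
import statistics

def describe(times):
    """Compute the descriptive statistics reported for one task."""
    return {
        "mean": statistics.mean(times),
        "median": statistics.median(times),
        "std": statistics.stdev(times),  # sample standard deviation, as SPSS reports
        "min": min(times),
        "max": max(times),
    }

# Hypothetical completion times (seconds) for one task.
sample = [3.2, 4.1, 5.0, 2.8, 6.3]
stats = describe(sample)
```

SPSS uses the sample (n-1) standard deviation by default, hence `statistics.stdev` rather than the population `pstdev`.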
To analyze the dispersion of the times obtained for each task and to detect outliers, a boxplot was created, shown in Figure 4.5.
Figure 4.5: Boxplot of task completion times.
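For reference, the points a boxplot flags as outliers are conventionally those lying more than 1.5 times the interquartile range beyond the quartiles. A minimal sketch of that rule follows; note that SPSS may use a slightly different quantile definition, so the flagged points can differ marginally, and the function name here is illustrative.

```python
import statistics

def iqr_outliers(values):
    """Return the values a boxplot would flag as outliers.

    A value is an outlier if it falls outside the interval
    [Q1 - 1.5*IQR, Q3 + 1.5*IQR], where IQR = Q3 - Q1.
    """
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]
```

For example, in a set of mostly similar times with one very slow attempt, only the slow attempt is flagged.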
While executing the tasks, users made small errors, described in the subsection ”User Tests for Interface”, which were recorded. Table 4.3 shows the statistical results of the number of errors made by the users in each task.
Table 4.3: Statistical results of user errors made in each task.
Task Mean Median Std. Deviation Minimum Maximum
Task 1 0.00 0.00 0.00000 0.00 0.00
Task 2 0.10 0.00 0.44721 0.00 2.00
Task 3 0.00 0.00 0.00000 0.00 0.00
Task 4 0.00 0.00 0.00000 0.00 0.00
Task 5 0.00 0.00 0.00000 0.00 0.00
Task 6 0.55 0.00 0.88704 0.00 3.00
Task 7 1.14 2.00 1.14248 0.00 4.00
After completing the proposed tasks, users answered a short questionnaire on the usability of the interface; Table 4.4 shows the results obtained.
Table 4.4: Statistical results of users' usability ratings.
Question Mean Median Std. Deviation Minimum Maximum
Question 1 5.00 5.00 0.00000 5.00 5.00
Question 2 4.95 5.00 0.22361 4.00 5.00
Question 3 5.00 5.00 0.00000 5.00 5.00
Question 4 5.00 5.00 0.00000 5.00 5.00
Question 5 5.00 5.00 0.00000 5.00 5.00
Question 6 4.80 5.00 0.41039 4.00 5.00
Question 7 4.45 4.50 0.16048 3.00 5.00
Question 8 4.30 4.00 0.57124 3.00 5.00
Question 9 5.00 5.00 0.00000 5.00 5.00
Question 10 5.00 5.00 0.00000 5.00 5.00
Question 11 4.10 4.00 0.55251 3.00 5.00
4.3 Discussion
To analyze the times users took to perform the tasks, Table ?? shows the expected completion times for the tasks proposed in the test; these times were obtained by analyzing each task and were supported by the pilot tests.
Analyzing the times users spent on the tasks, we can conclude the following:
• Task 1: All users were able to successfully complete the task within a reasonable time;
• Tasks 2, 3 and 4: Almost all users performed these tasks within a reasonable time. The users who took longer were identified as outliers in the statistical analysis; excluding these outliers, we can conclude that these tasks were completed successfully within a reasonable time;
• Task 5: Some users took a little longer to perform this task, as can be seen in the boxplot in Figure 4.5. A possible reason for these slightly higher values is the difficulty some users had with depth perception;
• Tasks 6 and 7: Although these tasks were more complex, all users were able to perform them within a reasonable time; moreover, the tasks had fairly low median values compared to the expected values.
Table 4.3 shows that no errors were recorded for tasks 1, 3, 4 and 5, since these tasks are easier and more intuitive to execute. In task 2, however, one user made two errors; this user belonged to an older age group, and the task used the Rotation gesture, which is somewhat more complex to execute.
More errors were expected in tasks 6 and 7 because they were more complex; nevertheless, task 6 was performed well, with an average of less than one error. Task 7 was the most difficult for the users, being more elaborate than the previous task, yet it was still performed adequately, with an average of 1.14 errors.
Analyzing the users' responses to the usability questions, we can see from the averages that every answer obtained a rating equal to or greater than 4. We can therefore conclude that users found the interface easy and intuitive to interact with. As expected, users found the more basic tasks easier to perform than the more complex ones, yet even the complex tasks were rated positively. Regarding the gestures, users considered the Zoom and Pan gestures well suited to the interaction, since all users assigned them the maximum rating. The Rotation gesture, being more complex, obtained a lower rating, but still a positive 4 out of 5; this rating is also consistent with the larger number of errors made when executing this gesture compared to the others.
It is also important to highlight the result of the last question, regarding the ease of interacting with the interface: users rated it 4 out of 5, which translates into an easy and intuitive interface.
An overall analysis of the results of this test shows that the interaction with the interface was very positive: users rated the interface as highly usable, task completion times were very short, and few or no errors were committed during the interaction.
5 Conclusions and Future Work
Contents
5.1 Conclusions
5.2 Future Work
5.1 Conclusions
This study consisted of creating an interface that allows navigating, exploring, and interacting with the human brain's connectome. Due to the complexity of this dataset, the need arose to explore alternative ways of manipulating it.
Thus, the Leap Motion device was chosen for interaction: it allows interaction in three-dimensional environments and can be easily acquired thanks to its affordable price.
During the user tests, the device proved easy and intuitive for the majority of users: despite having no prior training with the Leap Motion, users were able to perform the tasks quickly and with few errors.
After the tests, some users asked to continue interacting with the interface, and it could be observed that their performance improved very quickly.
The data obtained from the user tests revealed an interface that is easy to interact with and that achieved the desired objectives. Users sometimes had difficulty perceiving depth when performing the Selection or Expand gestures: because these gestures require touching the objects users intend to interact with, users had to stretch their hands a little further than anticipated, making an otherwise simple gesture a little more complex to perform.
One limitation of the interface concerned the rotation gesture, which did not allow rotating the connectome around the x-axis.
5.2 Future Work
As future work, several improvements could be made and new features added, such as:
• Improve the accuracy of the detection of the gestures, using machine learning techniques;
• Explore an alternative gesture for the rotation gesture;
• Use pop-up menus to allow users to access more interaction options with the interface. These
menus appear when, for example, the left palm is facing upwards and a menu that can be selected
with the right hand is projected over this hand;
• Extend the interface to include more information about the regions being interacted with.
A Documents survey
A.1 Protocol Survey
Survey protocol
Purpose
With the present study we intend to analyze which gestures best suit the interaction between the user and the interface. The purpose of this test is to select the gestures needed to manipulate the ConnectoMotion interface, an application for visualizing and interacting with the human brain connectome through hand gestures, captured with the LeapMotion. For this analysis, users will be asked to fill in a form indicating, for a given task, which of two gestures is best suited to performing it. The type of task is explained to the user, and the two possible gestures for performing it are demonstrated. At the end, the users' answers are analyzed and, for each task, the gesture that receives the most votes is defined as the gesture for performing that task.
User selection
Users will be recruited through standard procedures, including announcements on social networks and direct requests to people. Users may be anyone interested in participating in this study. Each participant will be asked to sign a consent form allowing the collection of their data.
Apparatus
Equipment: the user-interface interaction will take place on a 15-inch MacBook Pro Retina Mid-2014, with a 2.2 GHz Intel Core i7 processor, 16 GB of 1600 MHz DDR3 RAM, and an Intel Iris Pro 1536 MB graphics card.
Procedure
Users will first be asked to sign a consent form. Once the consent form is signed, the user is shown a short text explaining the interaction sequence they will encounter: “The following questionnaire is within the scope of my thesis ”ConnectoMotion”, in which I intend to create an application to visualize and interact with the human brain connectome through hand gestures, using the LeapMotion. This questionnaire is structured as follows: a small form with some basic user information (without compromising anonymity), followed by a sequence of questionnaires that use a scale to determine which gesture best suits a given task. The duration of this questionnaire is approximately 5 minutes. Thank you in advance for your availability.”
The user is then asked to fill in a small form providing some personal information that does not compromise anonymity. Next, a scale is presented to decide which of two gestures is best suited to performing a given task. This scheme is repeated for 3 different tasks, namely rotation, zoom, and pan. Finally, a thank-you message is shown and the form is submitted.
Compensation
As compensation for participating in this test, users will receive sweets (chocolates, gummies, etc.).
Risks and Benefits
There are no potential risks or benefits for the participants.
A.2 Consent form for survey
B Documents users test
B.1 Protocol user tests
Test protocol
Purpose
With the present study we intend to analyze which gestures best suit the interaction between the user and the interface. The purpose of this test is to select the gestures needed to manipulate the ConnectoMotion interface, an application for visualizing and interacting with the human brain connectome through hand gestures, captured with the LeapMotion. To evaluate the gestures implemented in the application, tests are performed in which users must complete a set of tasks, so as to understand how well the gestures suit the application and the performance of the tasks.
User selection
Users will be recruited through standard procedures, including announcements on social networks and direct requests to people. Users may be anyone interested in participating in this study. Each participant will be asked to sign a consent form allowing the collection of their data.
Apparatus
Equipment: the user-interface interaction will take place on a 15-inch MacBook Pro Retina Mid-2014, with a 2.2 GHz Intel Core i7 processor, 16 GB of 1600 MHz DDR3 RAM, and an Intel Iris Pro 1536 MB graphics card.
Procedure
Users will first be asked to sign a consent form. Once the consent form is signed, the user is shown a short text explaining the interaction sequence they will encounter: “The following questionnaire is within the scope of my thesis ”ConnectoMotion”, in which I intend to create an application to visualize and interact with the human brain connectome through hand gestures, using the LeapMotion. This questionnaire is structured as follows: a small form with some basic user information (without compromising anonymity), followed by a sequence of questionnaires that use a scale to determine which gesture best suits a given task. The duration of this questionnaire is approximately 5 minutes. Thank you in advance for your availability.”
The user is then asked to fill in a small form providing some personal information that does not compromise anonymity. Next, the user is asked to perform a test consisting of a set of tasks that allow evaluating the interface and the user-interface interaction. The test tasks begin with basic functionality and gradually increase in complexity, ending with more complex tasks that involve more functionality.
The gestures implemented for the application are the following:
• Zoom: pinch (bring the tips of the index finger and thumb together) with both hands. If the hands move apart along the x-axis, a Zoom In is produced; if they move closer together, a Zoom Out is produced.
• Rotation: pinch with both hands. If the hands move in depth, each in an opposite direction, the object rotates around the y-axis. If the hands move vertically, each in an opposite direction, the object rotates around the z-axis.
• Pan: close the hand into a fist; as the hand moves horizontally or vertically, the object moves in the same way.
• Selection: extend only the right index finger; touching a node selects or deselects it.
• Expand: extend the index and middle fingers; touching a node displays the region that the node represents.
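The two-hand pinch gestures described above could be distinguished with logic along these lines. This is a hedged sketch using simplified hand states (a position tuple and a pinch flag) rather than the actual Leap Motion SDK types; the function name and thresholds are illustrative, not the thesis's implementation.

```python
def classify_two_hand_gesture(prev, curr, threshold=10.0):
    """Classify a two-hand pinch movement as zoom or rotation.

    `prev` and `curr` are (left, right) pairs of hand states; each hand
    state is a dict with an (x, y, z) "pos" tuple (millimetres, as the
    Leap Motion reports positions) and a boolean "pinch" flag.
    """
    (pl, pr), (cl, cr) = prev, curr
    if not all(h["pinch"] for h in (pl, pr, cl, cr)):
        return None  # both hands must hold the pinch throughout

    # Change in horizontal separation between the hands -> zoom.
    dx = abs(cr["pos"][0] - cl["pos"][0]) - abs(pr["pos"][0] - pl["pos"][0])
    if dx > threshold:
        return "zoom_in"
    if dx < -threshold:
        return "zoom_out"

    # Hands moving in depth (z) in opposite directions -> rotate about y.
    dzl = cl["pos"][2] - pl["pos"][2]
    dzr = cr["pos"][2] - pr["pos"][2]
    if dzl * dzr < 0 and min(abs(dzl), abs(dzr)) > threshold:
        return "rotate_y"

    # Hands moving vertically (y) in opposite directions -> rotate about z.
    dyl = cl["pos"][1] - pl["pos"][1]
    dyr = cr["pos"][1] - pr["pos"][1]
    if dyl * dyr < 0 and min(abs(dyl), abs(dyr)) > threshold:
        return "rotate_z"
    return None
```

The zoom check runs before the rotation checks so that a movement combining both effects is read as a zoom, one plausible way to resolve the ambiguity.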
The tasks to be performed are the following, in this order:
• 1. Use the selection gesture to select a node connected to the preselected node;
• 2. Use the rotation gesture to highlight the preselected node;
• 3. Use the zoom gesture until the preselected node is clearly visible;
• 4. Use the pan gesture to place the preselected node in the center of the screen;
• 5. Use the expand gesture to view the region of the preselected node;
• 6. Use the various gestures to view the region of the preselected node;
• 7. Use the various gestures to place the preselected node in the center of the screen, select an adjacent node, and expand both regions.
To evaluate task performance, the time each user takes to complete each task and the errors made in each task will be recorded.
Compensation
As compensation for participating in this test, users will receive sweets (chocolates, gummies, etc.).
Risks and Benefits
There are no potential risks or benefits for the participants.
B.2 Consent form for user tests
C Documents users test
C.1 Informative text in English
The following questionnaire is within the scope of my thesis ”ConnectoMotion” in which I intend to
create an application to visualize and interact with the human brain connectome through hand gestures,
using LeapMotion.
This questionnaire is structured as follows:
• A small form with some basic user information (without compromising anonymity).
• A sequence of questionnaires that use a scale to perceive which gesture best suits a certain task.
The duration of this questionnaire is approximately 5 minutes.
Thank you in advance for your availability.
C.2 User characterization in Portuguese
• Idade - o utilizador escolhe o intervalo correspondente a sua idade (menos de 11 anos, 11-20
anos, 21-30 anos, 31-40 anos, 41-50 anos, 51-60 anos, mais de 60 anos);
• Genero - o utilizador escolhe entre Masculino ou Feminino;
• Nacionalidade - resposta aberta;
• Grau de Escolaridade - o utilizador pode escolher: ”1º Ciclo do Ensino Basico (4ºano)”, ”2º Ciclo
do Ensino Basico (6ºano)”, ”3º Ciclo do Ensino Basico (9ºano)”, ”Ensino Secundario (12ºano)”,
”Ensino profissional”, ”Licenciatura”, ”Mestrado” and ”Douturamento”.
• A frequencia com que o utilizador interage com interfaces gestuais (como leap motion, tablets,
smartphones) - o utilizador pode escolher: ”Nunca”, ”1 a 2 vezes por mes”, ”1 vez por semana”,
”2 a 3 vezes por semana”, ”4 a 5 vezes por semana” ou ”Todos os dias”.
C.3 User characterization in English
• Age - the user selects the range corresponding to their age (less than 11 years, 11-20 years, 21-30 years, 31-40 years, 41-50 years, 51-60 years, more than 60 years);
• Gender - the user could choose either masculine or feminine;
• Nationality - open answer;
• Education Degree - the user could choose ”Elementary School (4th grade)”, ”Middle School (6th grade)”, ”Junior School (9th grade)”, ”High School (12th grade)”, ”Professional school”, ”Bachelor”, ”Master” or ”Doctorate”.
• The frequency with which the user interacts with gestural interfaces (such as Leap Motion, tablets, smartphones) - the user could choose ”Never”, ”1 to 2 times a month”, ”1 time a week”, ”2 to 3 times a week”, ”4 to 5 times a week” or ”Every day”.
C.4 Informative text of the second questionnaire of user tests
The following questionnaire is within the scope of my thesis ”ConnectoMotion” in which I intend to
create an application to visualize and interact with the human brain connectome through hand gestures,
using LeapMotion.
This questionnaire is structured as follows:
• A small form with some basic user information (without compromising anonymity).
• Usability classification of the interface.
The duration of this questionnaire is approximately 3 minutes.
Thank you in advance for your availability.
D Dataset File