Hindawi Publishing Corporation
Mobile Information Systems, Volume 2016, Article ID 1714350, 12 pages
http://dx.doi.org/10.1155/2016/1714350

Research Article

A Mobile Application That Allows Children in the Early Childhood to Program Robots

Kryscia Ramírez-Benavides, Gustavo López, and Luis A. Guerrero
Universidad de Costa Rica, San José, Costa Rica

Correspondence should be addressed to Kryscia Ramírez-Benavides; kryscia.ramirez@ucr.ac.cr

Received 30 May 2016; Revised 10 October 2016; Accepted 13 October 2016

Academic Editor: Laurence T. Yang

Copyright © 2016 Kryscia Ramírez-Benavides et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Children born in the Information Age are digital natives; this characteristic should be exploited to improve the learning process through the use of technology. This paper addresses the design, construction, and evaluation process of TITIBOTS, a programming assistance tool for mobile devices that allows children in early childhood to create programs and execute them using robots. We present the results of using TITIBOTS in different scenarios with children between 4 and 6 years old. The insight obtained in the development and evaluation of the tool could be useful when creating applications for children in early childhood. The results were promising: children liked the application and were willing to continue using it to program robots to solve specific tasks, developing the skills of the 21st century.

1. Introduction

The Information Age is a period in human history in which the use of technological tools is extensive and almost every human activity is based on information computerization [1]. Children born in the Information Age are called digital natives [2]. Incorporating activities that promote 21st-century skills [3] into the learning process helps digital natives develop abstract thinking abilities and apply them in an organized way [4–6].

Many authors have discussed the importance of programming as a capability for digital natives. Papert [4, 5] described programming as a tool that develops a comprehensive set of interconnected capabilities such as problem solving, teamwork, persistence, logical-mathematical thinking, abstraction, and creativity. Resnick [7] considers programming the new literacy; he states that, "in addition to writing and reading, programming helps organize thoughts and express ideas." Furthermore, the skills gained with programming and robotics are a key aspect in the development of children and their future career [3, 8].

There is a global deficit of science, technology, engineering, and mathematics (STEM) professionals [9]; therefore, countries are challenged to promote these disciplines. STEM concepts are complex, but they can be presented in stimulating ways, such as robotics [10] and mobile applications.

This paper describes the design, construction, and evaluation process of TITIBOTS, a mobile programming assistance tool (PAT) [11] that allows children in early childhood to develop programs and execute them using robots. TITIBOTS has an icon-based interface and integrates visual programming, robotics, and mobile devices in one tool. Moreover, the main issues and lessons learned during this process are described.

TITIBOTS was developed and evaluated applying several Human-Computer Interaction techniques, such as participatory design, experience prototyping [12], and usability testing [12, 13].

The main research question driving this work was to assess whether children between 4 and 6 years old can use a PAT based on mobile interfaces and robots.
A major difficulty in this research work was to create a graphical interface usable by kids between 4 and 6 years old. Similar works have been conducted in past years; however, those projects focus mainly on older children. The proposed mobile PAT was combined with robots to promote a fun exploration of complex concepts involving sensory, motor, and socioemotional skills [8, 10].

The evaluation of TITIBOTS was conducted in Costa Rica, with the participation of over 50 children in early childhood during the different evaluation stages. This evaluation showed that children in early childhood are able to program robots using mobile applications through TITIBOTS.

2. Robot Programming Assistance Tools for Children

This section presents the summarized results of a systematic literature review conducted to find PATs for programming robots that are usable by children under 12 years old. The review was conducted in three academic databases: the ACM and IEEE digital libraries and Springer Link. The search query included the following keywords: programming environment, programming kit, programming interface, programming language, robots, robotics, and children. The selection criteria were established to find original research papers presenting empirical studies in real contexts.

In the 1970s, Papert and his students at MIT created Logo, a programming language and robot [4]. Logo acquired great popularity in the 1980s. In 1985, MIT researchers replaced the robot with a graphical representation. Later that decade, in 1988, Resnick and Ocko developed at the MIT Media Lab a sensor and actuator system called LEGO TC Logo. This idea was a commercial success, and the tool reached thousands of classrooms [14].

MicroWorlds was released in 1993 by LCSI [15]. At one point, MicroWorlds also incorporated the possibility of programming a robot. In 1994, the RCX brick (a programmable device and PAT) was released. This PAT was based on icons and allowed users to create diagrams (programs) that controlled the RCX [14].

Table 1: Robot programming tools ordered by age.

Programming tool | Ages | Publication year
PRIMO | 4–7 | 2013
KIBO | 4–7 | 2014
Wonder Workshop | 5–8 | 2013
LEGO WeDo | 6–12 | 2005
Logo | 7–12 | 1970
LEGO TC Logo | 7–12 | 1988
LogoBlocks | 7–12 | 1994
LEGO RoboLab RCX | 7–12 | 1994
PicoCrickets and PicoBlocks | 7–12 | 2006
LEGO Mindstorms NXT | 7–12 | 2006
LEGO Mindstorms EV3 | 7–12 | 2013
MoWay | 7–12 | 2012
miniBloq and RobotGroup | 8–12 | 2008
Enchanting | 8–12 | 2010
MicroWorlds EX Robotics | 10–12 | 1994
The LEGO WeDo set, designed by LEGO in 2005, consisted of a set of mechanical parts used to build and design LEGO models. LEGO WeDo had an easy-to-use, icon-based PAT [16].

The LEGO Mindstorms NXT brick was released in 2006. Its visual programming environment allows a novice programmer to easily create programs [17]. Smaller versions of the NXT were available commercially; these bricks were called PicoCricket, and they had an associated PAT called PicoBlocks. However, this brick was discontinued in 2010 [18]. MIT also approached their programmable brick with a puzzle-based programming interface called LogoBlocks [14].

In 2007, miniBloq appeared. It was an open-source, multiplatform graphic PAT based on C++. It uses symbols to visualize, in real time, possible coding errors [19]. Enchanting appeared in 2010; it is a programming environment for the LEGO Mindstorms NXT, based on Scratch, that supports leJOS NXJ [20]. MoWay, released in 2012, is a small autonomous robot with its own programming language [21].

The EV3 brick of LEGO Mindstorms appeared in 2013. It can be controlled using a remote control and a mobile application called "Robot Commander," which is available for iOS and Android devices [22].

In 2013, Wonder Workshop was presented. Wonder's main goal is that two small robots, Dash and Dot, teach children over 5 years old basic programming concepts through interactive games [23]. Also in 2013, Yacob et al. presented Primo, a tangible programming interface for children between 4 and 7 [24].

KIBO appeared in 2014. KIBO is a robotic kit for children between 4 and 7 years old that uses a tangible programming interface [25]. KIBO is the result of the research conducted by Marina Bers and her research group DevTech at Tufts University.

Table 1 summarizes the results of the systematic literature review. The programming tools most similar to TITIBOTS are KIBO, PRIMO, Wonder, and the LEGO WeDo software, since they are focused on children under 6 years old. However, KIBO and PRIMO use tangible interfaces instead of mobile systems, WeDo does not use robots, and the Wonder app is restricted in the robots it can control. TITIBOTS uses an open set of commands that allows the use of any robot, and it was designed with an icon-based interface so that children who do not yet know how to read can use it.

Most of the systems presented in this section are commercialized by private companies; therefore, there is no evidence of their performance in the academic literature. Moreover, longitudinal studies have not been conducted to assess their overall impact on the educative process.

3. TITIBOTS

TITIBOTS is a mobile programming assistance tool developed for children in early childhood. It allows children to create programs and execute them using a robot.

Children between 4 and 6 years old are our target users. Usually, at these ages, children are still unable to read or write. TITIBOTS therefore provides a simple graphical user interface with an iconographic approach. The design is intended to be intuitive and usable for children; it fits mobile devices and allows the use of any robot with an open set of commands.

Figure 1 shows the main components of TITIBOTS: the robot and the mobile application.

Figure 1: TITIBOTS robot (a) and mobile user interface (b).

We implemented an easy-to-use wizard that handles configuration and connection between the robot and the mobile application. The communication between the tablet and the robot is via Bluetooth. The teacher can configure the system and add or remove robots. This version of the system runs on Android devices. Once the tablet is connected with the robot, the child can create a program by dragging and dropping the available commands and can then send it to the robot for execution.

The TITIBOTS architecture (see Figure 2) consists of two main components:

(1) A mobile application that runs on the tablet and shows a simple command interface and the workspace of the PAT. Commands include the following:
(i) Control: start and end (white commands, see Figure 1)
(ii) Movement (locomotion): forward, backward, left, and right (blue commands, see Figure 1)
(iii) Action (manipulation): turn on, turn off, grasp, and release (green and red commands, see Figure 1)

(2) A program in the robot that interprets and executes the commands sent through the Bluetooth connection from the mobile device to the robot.

Figure 2: TITIBOTS architecture. The Android application (user interface with robot selection, main screen, control layer, and command sender) communicates via Bluetooth with the robot application (command-type interpreter and command executor).
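To make the interaction between the two components concrete, the following minimal Java sketch illustrates how the Android side might encode an icon sequence as one byte per command and send it over a Bluetooth serial socket. The class names (IconCommand, ProgramSender), the one-character protocol, and the use of the standard Serial Port Profile UUID are our illustrative assumptions; the paper does not publish the actual TITIBOTS source code.

import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.UUID;

// Hypothetical mapping: one workspace icon corresponds to one byte code.
enum IconCommand {
    START('S'), END('Z'),                                   // control
    FORWARD('F'), BACKWARD('B'), LEFT('L'), RIGHT('R'),     // movement
    TURN_ON('O'), TURN_OFF('X'), GRASP('G'), RELEASE('E');  // action

    final char code;
    IconCommand(char code) { this.code = code; }
}

class ProgramSender {
    // Standard Serial Port Profile UUID; the real wire protocol is unknown to us.
    private static final UUID SPP_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    private final BluetoothSocket socket;

    ProgramSender(BluetoothDevice robot) throws IOException {
        socket = robot.createRfcommSocketToServiceRecord(SPP_UUID);
        socket.connect();
    }

    // Serialize the whole icon sequence and send it as one message, so the
    // child composes a complete program instead of driving the robot live.
    void send(List<IconCommand> program) throws IOException {
        StringBuilder payload = new StringBuilder();
        for (IconCommand command : program) {
            payload.append(command.code);
        }
        OutputStream out = socket.getOutputStream();
        out.write(payload.toString().getBytes(StandardCharsets.US_ASCII));
        out.flush();
    }
}

Sending the sequence as a single message matches the design intention reported later in the evaluation: children should write the whole sequence before executing it, rather than using the tool as a remote control.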
The teacher specifies the available commands for each work session. A stage approach was developed, allowing the teacher to use a gesture to unlock new commands once he or she thinks the child has achieved a certain level of effectiveness with the available commands (see Figure 3).

Figure 3: Stage approach used to increasingly add commands using teacher's gestures.

The basic programming concepts addressed with the use of the tool are algorithms and sequentialization. An algorithm is a set of instructions (steps) to perform a task. It is important to emphasize that algorithms are the core of programming; each program is simply a list of instructions that the computer must follow in a certain order (sequentialization).
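This sequentialization idea (a program is nothing more than an ordered list of steps) is also what the robot-side component has to honor. The following sketch is our own illustration in plain Java, not the shipped robot firmware; it reuses the hypothetical one-character codes from the previous sketch and dispatches them strictly in order:

// Hypothetical abstraction over the robot hardware (motors, light, claw).
interface RobotActions {
    void forward();
    void backward();
    void left();
    void right();
    void lightOn();
    void lightOff();
    void grasp();
    void release();
}

class SequentialInterpreter {
    private final RobotActions robot;

    SequentialInterpreter(RobotActions robot) { this.robot = robot; }

    // Execute the received codes strictly in order: the order of the icons
    // on the screen is exactly the order of the robot's actions.
    void run(String payload) {
        for (char code : payload.toCharArray()) {
            switch (code) {
                case 'F': robot.forward();  break;
                case 'B': robot.backward(); break;
                case 'L': robot.left();     break;
                case 'R': robot.right();    break;
                case 'O': robot.lightOn();  break;
                case 'X': robot.lightOff(); break;
                case 'G': robot.grasp();    break;
                case 'E': robot.release();  break;
                case 'S': case 'Z': break;  // start/end markers: no action
                default: break;             // ignore unknown codes
            }
        }
    }
}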
Among the robotics concepts addressed are effectors, actuators, locomotion, and handling. Effectors and actuators are components of a robot. An effector corresponds to any device that affects or modifies the environment. An actuator is any mechanism that allows the effector to perform an action, for example, servos and lights. The locomotion system allows the robot to move through the environment, while the handling system allows the robot to articulate and reach objects in the environment.

In order to evaluate the TITIBOTS software interface, we designed and constructed a Mindstorms NXT robot. Our robot was able to move and to turn a light on and off, and it was provided with a claw that could be opened and closed.

We designed and evaluated our system in collaboration with the Omar Dengo Foundation (FOD), a nonprofit Costa Rican organization created in 1987. Education experts working at FOD provided a set of functional and nonfunctional requirements in a participatory design process. Based on the gathered requirements, the system should

(i) use a metaphor that allows a clear understanding for the kids,
(ii) provide a set of commands (8 to 10) that the robot can execute,
(iii) implement control structures [26],
(iv) store the last programmed routine locally,
(v) guide the user in a corrective programming process,
(vi) connect via Bluetooth with the robot,
(vii) allow Text-to-Speech (TTS) capability,
(viii) have actuators: servos and lights, at least.

Additionally, the preschool teachers defined several tasks to be programmed by the children. Most of the tasks are movement sequences (i.e., move the robot from one place to another or move objects).

4. Evaluation

TITIBOTS was designed using a user-centered design process following ISO 13407 [27]. Furthermore, we performed a concept validation and a prototype evaluation. The evaluation process consisted of four stages, which can be observed in Figure 4.

Figure 4: Evaluation approach. (The flowchart maps each stage to its methods and data analysis: first stage, validation of paper prototypes with experts and preschool teachers, producing a consensus and a preliminary design; second stage, iconography and interaction validation with children through controlled observations and qualitative data analysis, with frequency tables, frequency graphs, and subjective ratings; third stage, scenario-based usability testing with end users, collecting user profile information, task performance, errors, and memory-test data for quantitative analysis of learnability, efficiency, memorability, and errors; fourth stage, development and use in real settings, adding subjective satisfaction measurements from a satisfaction questionnaire.)

We conducted a group sketching activity [28] to design a preliminary version of TITIBOTS. The first stage of the evaluation process was a concept validation with experts and preschool teachers, in which iconography and interaction were validated. When we achieved a consensus, the best interaction proposals were evaluated with children.

The second stage was the iconography and interaction validation with children. At this stage, we created a sketch that represented the interface, iconography, and interaction patterns. With the information gathered, we developed the first functional prototype of TITIBOTS.

In the third stage, we evaluated the TITIBOTS functionality with children. For this purpose, we created a set of challenges for the children and used observations to evaluate their behavior. The main goal of this evaluation was to see the children's reaction to the tool and to find difficulties.

Finally, the prototype of TITIBOTS was evaluated in a real scenario: a workshop at FOD with 4- to 5-year-old kids. The last two stages were performed with children between 4 and 5 years old, because we wanted to test the prototype with the youngest end users.

We followed the methodology proposed by Nielsen to conduct a usability test with users [29]. Table 2 shows the metrics associated with each usability attribute and the acceptance criteria for each attribute [29, 30].

Table 2: Usability attributes and associated metrics.

Learnability
Metrics: (i) average time used to complete a challenge the first time; (ii) average time of training.
Goals: (i) the average time to complete one challenge for children should be between 10 and 30 minutes; (ii) the average time of training should be between 30 and 60 minutes.

Efficiency
Metrics: (i) total and percentage of successful challenges; (ii) average time to complete a challenge.
Goals: (i) successful challenge completions should be above 70%; (ii) the average time to complete one challenge for children should be between 10 and 20 minutes.

Memorability
Metric: total and percentage of correct answers about the application.
Goal: novice users should memorize at least half of the functionalities of the system, whereas experienced users should memorize over 80%.

Errors
Metrics: (i) total and average number of unsuccessful attempts; (ii) total and average number of recovered unsuccessful attempts.
Goals: (i) the average number of errors should be between 5 and 10; (ii) the average number of recovered errors per user should be above 60% of total errors.

Satisfaction
Metrics: (i) like or not like; (ii) difficulty level.
Goal: the percentage of satisfaction should usually be above 65%.

To collect quantitative and qualitative data, we used the following measuring instruments [29, 30]:

(i) Semistructured interview: aimed at obtaining user profile information
(ii) Memory test: a questionnaire for measuring the number of successfully memorized system functions
(iii) Recordings: video and audio recordings of the pilots
(iv) Evaluator's booklet: a booklet in which the experimenter conducting the assessment takes notes, describes identified problems, and fills in information about average completed tasks and time spent on task
(v) Satisfaction questionnaire: a two-point questionnaire used for the users' subjective evaluation

In the following subsections, we introduce the instruments and procedures used in the evaluation of TITIBOTS, as well as the participants and results. Table 3 shows the participants in the evaluation of TITIBOTS per activity, distributed by gender and age, and Figure 5 shows the user profile information obtained through the semistructured interview, which was conducted with the 13 children of the scenario-based usability testing (third and fourth stages).

Table 3: Children that participated in the evaluation of TITIBOTS per activity.

Activity | Participants | Male | Female | Age 4 | Age 5 | Age 6
2nd stage: iconography and interaction validation | 40 (76%) | 20 (50%) | 20 (50%) | 11 (27.5%) | 19 (47.5%) | 10 (25%)
3rd stage: testing process with end users | 7 (13%) | 3 (43%) | 4 (57%) | 7 (100%) | 0 | 0
4th stage: deployment and use in real setting | 6 (11%) | 2 (33%) | 4 (67%) | 4 (67%) | 2 (33%) | 0
Total (percentage) | 53 (100%) | 25 (47%) | 28 (53%) | 22 (42%) | 21 (40%) | 10 (18%)
Figure 5: User profile information. (Bar chart of percentage of students: a family member has a smartphone or tablet, yes 100%, no 0%; uses a smartphone or tablet, yes 85%, no 15%; personally owns a smartphone or tablet, yes 54%, no 46%.)

4.1. Validation. In order to validate the interface design, iconography, and interaction, we created a set of instruments and validated the form, color, and possible interaction with 40 children. To perform the validation, we carried out a laboratory test [31]; we performed highly controlled observations and collected qualitative data.

4.1.1. Participants. This experiment was carried out at a Costa Rican primary school. Two groups of children were included in the validation, both with children aged 4 to 6 years old. The selection of participants was made through nonprobabilistic, intentional sampling based on the availability of the teachers. Three evaluators conducted this validation, and one person was responsible for the logistic aspects.

4.1.2. Setting and Instruments. The validation was carried out in the schoolyard, 10 feet away from the classroom entrance. The setting consisted of four desks for toddlers at a distance of five feet from each other. The children stayed inside the classroom until they were called by the person responsible for logistics and shown to a desk with one evaluator, who conducted the validation.

The elements used in the validation were a physical prototype of the interface and a validation guide. The evaluator used the guide to take notes and follow the process, checking items on the questionnaire according to the children's reactions. The physical interface consisted of a sheet with the designed programming icons, three sheets with the versions of the sequence options evaluated, and three sheets with the interaction patterns evaluated.

Figure 6 shows the interaction patterns and sequence options evaluated. Figure 6(a) shows the insert interaction: the user presses the place in which he or she wants to insert a command, and the options appear. Figure 6(b) shows the drag-and-drop interaction: the commands are selected and dragged to the place in which the user wants to insert them. Finally, Figure 6(c) shows the sequence options: monkey paw prints, dotted line, and no guide.

Figure 6: Interaction patterns: (a) insert and (b) drag & drop; (c) sequence options evaluated: claws, dots, and none.

4.1.3. Procedure. The activity lasted four hours, at a rate of approximately 24 minutes per child, three children at a time. The evaluator first introduced herself and asked the child his or her age. Then, the instruments were presented and the process was explained. Three different validations were conducted: icon validation (10 icons), interface guide design, and interaction modes.

With the help of a graphical designer and FOD's professionals, we constructed 10 icons that allowed the children to execute the actions described in the requirements. The researcher asked the child the meaning of each icon and annotated whether the child got it right or wrong, along with any other interpretation stated by the child.

The second validation concerned the interface guide design. Three different approaches were designed in order to determine how to guide the children in the interface: a dotted line, monkey paw prints, and an interface without guidelines (see Figure 6(c)). The three options were presented to each child, and the children were asked to explain the sequence order of a set of icons. Every evaluator presented the three possibilities in random order.

The third validation focused on the interaction. Again, three options were available: insert (Figure 6(a)), drag and drop (Figure 6(b)), and a guided mode. The insert interaction required the child to point at the selected place; then the set of possibilities appeared, and the child selected the action or command to insert.
The drag-and-drop option allowed the children to point at an icon, drag it to the desired place, and drop it. The guided interaction offered two options every time. This validation activity was performed only once per child (i.e., 13 validations per interaction mode were performed).

4.1.4. Results. We focused on the children's ability to understand the interface and to interact with the tool. We divided the results by age, and 30% of the children were considered a control group. Therefore, we had 25% of children aged 4, 25% aged 5, 20% aged 6, and 30% control.

Table 4 shows the results of this first validation. The start and end icons were not recognized at all. Grasp and release reached 23% correctness. All the other commands had over 60% recognition. As for the interface and interaction design, we found that the results were not significantly different.

Table 4: Results of the evaluation of icons, interface, and interaction (percentage of correct answers or completed tasks).

Item | Age 4 | Age 5 | Age 6 | Control | Total
Icon: Start | 0% | 0% | 13% | 0% | 3%
Icon: End | 0% | 0% | 13% | 0% | 3%
Icon: Forward | 40% | 90% | 100% | 42% | 65%
Icon: Backward | 40% | 90% | 100% | 42% | 65%
Icon: Left | 40% | 100% | 100% | 50% | 70%
Icon: Right | 40% | 100% | 100% | 50% | 70%
Icon: Turn on | 50% | 80% | 88% | 50% | 65%
Icon: Turn off | 50% | 80% | 88% | 42% | 63%
Icon: Grasp | 0% | 20% | 63% | 17% | 23%
Icon: Release | 0% | 20% | 63% | 17% | 23%
Interface: Dotted line | 30% | 20% | 63% | 8% | 28%
Interface: Monkey paw print | 30% | 30% | 63% | 17% | 33%
Interface: No guide | 30% | 40% | 50% | 17% | 33%
Interaction: Task completed | 100% | 100% | 67% | 50% | 86%
Interaction: Drag & Drop | 67% | 100% | 67% | 100% | 85%
Interaction: Insert | 100% | 67% | 100% | 50% | 77%

Using the results of the validation, several changes in the design of the tool were implemented. The most significant changes were

(i) reducing the graphical load of the interface (i.e., decreasing background colors and figures, removing visual distractions to allow focus on the relevant elements),
(ii) performing visual closures (for attention) and using primary and secondary colors,
(iii) enlarging icon size.
The evaluation was conducted by a total of 11 Age people including 7 observers, 1 mediator, 3 robot experts, andCorrect answers one cameraman. The mediator explained the process to the 4 5 6 Control children. The observers watched while children attempted to Icon achieve the given goals. The robot experts were present to fix Start 0% 0% 13% 0% 3% the robots if necessary. End 0% 0% 13% 0% 3% Forward 40% 90% 100% 42% 65% 4.2.2. Setting and Instruments. Theevaluationwas conducted Backward 40% 90% 100% 42% 65% at FOD’s Central Office, in a robotics laboratory. Four Left 40% 100% 100% 50% 70% working spaces were marked using adhesive tape in the floor, Right 40% 100% 100% 50% 70% each separated by 78 inches (see Figure 7). The central spacewas used to set cameras and for the observers tomovewithout Turn on 50% 80% 88% 50% 65% interfering with the work of children. We defined tutorials Turn off 50% 80% 88% 42% 63% and observation guides for every activity. Grasp 0% 20% 63% 17% 23% Release 0% 20% 63% 17% 23% 4.2.3. Procedure. The evaluation process was carried in two Interface sessions (an hour and twenty minutes each one). In the Dotted line 30% 20% 63% 8% 28% first session, a technical mediation was performed. A teacher Monkey paw print 30% 30% 63% 17% 33% explained the general functions of TITIBOTS, the inter- No-guide 30% 40% 50% 17% 33% action, and the meaning of each instruction. In the sec-ond session, a recreational mediation was conducted; in this Interaction mediation the teacher played with the children, and each Task completed 100% 100% 67% 50% 86% game introduced the instructions that TITIBOTS allowed. Drag & Drop 67% 100% 67% 100% 85% Before each session the teacher in charge did a welcome Insert 100% 67% 100% 50% 77% activity; each member of the crew was introduced. After the introduction, the teacher asked each child if they had tablets (user profile). 4.1.4. Results. We were focused on the children’s ability to The evaluation was designed to assess if the children were understand the interface and to interact with the tool. We able to achieve each task in a 30 minutes’ period. The tasks divided the results by age; 30% of the children were consid- were as follows: ered as a control group. Therefore, we have 25% of children (1) Move the robot from the start point, in a straight line, with 4 years old, 25% with 5 years old, 20% with 6 years old, going through a tunnel (must turn the light on inside and 30% control. the tunnel). Table 4 shows the results of this first validation. The start and end icons were not recognized at all. The grasp and (2) Move the robot from the start point, in a straight line, release had a 23% of correctness. All the other commands grab a ball in the end of the game area, and return to had over 60% recognition. As for the interface and interaction the start point. design we found that the results were not significantly (3) Move in a straight line to the ball, grab it, turn left, different. move, and release the ball. Percentage of students 8 Mobile Information Systems (a) (b) (c) Figure 6: Interaction patterns: (a) Insert and (b) Drag & Drop. (c) Sequence options evaluated: claws, dots, and none. observer looked the activity and took notes. At the end of the session a satisfaction questionnaire was conducted. 4.2.4. Results. 
Observations were performed in order to evaluate the usability of the software [33], to determine the necessity of a teacher's intervention when using TITIBOTS, and to establish whether strong guidance helps in the learning process. Usability metrics are shown in Table 5.

Table 5: Usability goal outcomes (third stage).

Usability attribute | Outcome | Goal met?
Learnability | Average time to complete a challenge the first time = 11.7 minutes (goal: 10–30 minutes) | Successful
Learnability | Average time of training = 30 minutes (goal: 30–60 minutes) | Successful
Efficiency | Percentage of successful challenges = 33% (goal: >70%) | Unsuccessful
Efficiency | Average time to complete one challenge = 11.6 minutes (goal: 10–20 minutes) | Successful
Memorability | Percentage of correct answers about the application = 93% (goal: >80%) | Successful
Errors | Average number of errors = 4.3 (goal: 5–10) | Successful
Errors | Average recovered errors = 0.7, 16% of total errors (goal: >60%) | Unsuccessful
Satisfaction | Like = 100% (goal: >65%) | Successful
Satisfaction | Easy = 64% (goal: >65%) | Unsuccessful

Three challenges were designed for the children to solve. Nevertheless, we did not expect them to conclude all three challenges in the time frame provided. The evaluation showed that the application has good usability: the interface proved to be simple and intuitive. Moreover, all the participants showed interest in the application and wanted to keep using it after the activity.

We observed that recreational mediation has a strong influence on the use of the tool and on the level of achievement in the challenges. In the recreational mediation session, the use of the application was simpler. In contrast, in the technical mediation session, the children used the application as a remote control (i.e., placing a command and sending it several times to the robot). This was exactly what we did not want the children to do, because we want them to create the sequence of steps in their minds and write the whole sequence in the tool before sending it to the robot.

Two of the participants in the nonmediated session ended up frustrated and did not finish the challenges. On the other hand, all the children in the mediated session continued trying until the time frame finished. As expected, the children were only able to finish the first challenge: half of the children in the nonmediated session and all of them in the mediated session. The main difference was that those who received the mediation succeeded with fewer attempts and in less time.

We found several problems in our system during the evaluation. For instance, the robot's claws smashed easily; this forced a redesign of the software to avoid an open command if the claw was already opened. The Bluetooth connection was unstable, so the software was redesigned to reconnect automatically. Commands that needed to be placed in pairs (e.g., on and off, catch and release) were easier for the children to understand than those laid out keyboard-style (forward and backward, left and right); the software interface was redesigned to allocate all the commands in pairs. Finally, real-time feedback from the application is required to let the user know what is happening.
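Two of these fixes lend themselves to a short sketch. The fragment below is a hypothetical reconstruction, not the actual TITIBOTS code: a claw-state guard that silently drops a redundant open (or close) command, and a connect routine that retries a dropped Bluetooth link with a simple backoff.

import java.io.IOException;

// Hypothetical guard: track the claw state so that a second "open" command
// is dropped instead of forcing the already-open claw.
class ClawGuard {
    private boolean open = false;

    boolean tryOpen() {
        if (open) return false;   // redundant command: drop it
        open = true;
        return true;
    }

    boolean tryClose() {
        if (!open) return false;  // redundant command: drop it
        open = false;
        return true;
    }
}

// Hypothetical automatic reconnection: retry the Bluetooth link a few times
// instead of surfacing every dropped connection to the child.
class ReconnectingLink {
    interface Connector { void connect() throws IOException; }

    static void connectWithRetry(Connector link, int maxAttempts)
            throws IOException, InterruptedException {
        IOException last = new IOException("Bluetooth connection failed");
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                link.connect();
                return;                          // connected
            } catch (IOException e) {
                last = e;
                Thread.sleep(1000L * attempt);   // simple linear backoff
            }
        }
        throw last;
    }
}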
4.3. Deployment and Use in a Real Setting. Once we had tested the proper functionality and design of TITIBOTS, we arranged with FOD to create a robotics workshop for 4- to 5-year-olds in which the children use the tool to be introduced to programming concepts.

4.3.1. Participants. The workshop was designed for 6 children. Originally, the children were three boys and three girls; however, due to a last-minute problem, a girl substituted for one of the boys. The girls ranged between four years and three months and five years and six months of age. The two boys were four years and four months and four years and ten months old.

4.3.2. Setting, Instruments, and Procedure. The workshop was carried out at FOD's Central Office, and it was designed for 8 hours distributed over 4 days. Figure 8 shows some pictures of the workshop during different activities.

During the workshop, the teacher played with the children, and each game introduced the instructions that the TITIBOTS tool allowed each day. On the first day, the teacher asked each child if they had tablets (user profile). The last day was for the evaluation; it consisted of three challenges over a period of 60 minutes:

(1) Move the robot from the start point, in a straight line, going through a tunnel (the player must turn the light on inside the tunnel).
(2) Move the robot from the start point, in a straight line, grab a ball at the end of the game area, and return to the start point.
(3) Move in a straight line to the ball, grab it, turn left, move, and release the ball.

Two observers participated during the whole workshop, watching the children's actions and expressions. The challenges were explained, each child started the challenges, and the observers recorded the activity and took notes. At the end of the session, a satisfaction questionnaire was administered, asking about the robot and the application.

4.3.3. Results. The most important result of the workshop (obtained from the evaluator's report, the recordings, and the usability metrics) was that the children were always happy and attentive with the PAT. They found it easy to use and fun, according to the satisfaction questionnaire. Furthermore, they had no problems understanding the commands or other miscellaneous buttons, such as clear screen, load program, and disconnect (according to the memorability usability attribute). The usability metrics obtained in the test are shown in Table 6.

FOD's experts considered this workshop implementation a success, because they felt that all the participants achieved the basic knowledge intended for the activity. Besides, the teacher and the observers consider that the events in which each child mimicked the robot and acted out the commands were crucial to the learning process.

We found that the older participants achieved an exceptional success rate; we think that the tasks were too easy for them. The younger participants (4 years old) had difficulties; however, the interaction with the older ones helped.
Table 6: Usability goal outcomes (fourth stage).

Usability attribute | Outcome | Goal met?
Learnability | Average time to complete a challenge the first time = 10.9 minutes (goal: 10–30 minutes) | Successful
Learnability | Average time of training = 43.7 minutes (goal: 30–60 minutes) | Successful
Efficiency | Percentage of successful challenges = 88% (goal: >70%) | Successful
Efficiency | Average time to complete one challenge = 18 minutes (goal: 10–20 minutes) | Successful
Memorability | Percentage of correct answers about the application = 100% (goal: >80%) | Successful
Errors | Average number of errors = 3.4 (goal: 5–10) | Successful
Errors | Average recovered errors = 2.3, 69% of total errors (goal: >60%) | Successful
Satisfaction | Like = 100% (goal: >65%) | Successful
Satisfaction | Easy = 83% (goal: >65%) | Successful

Figure 8: (a) Children planning the solution of a challenge on a small whiteboard. (b) Child implementing the proposed solution using TITIBOTS. (c) Child watching a robot execute a program sent using TITIBOTS.

This leads us to believe that a workshop integrating 4- to 5-year-old children is beneficial. The time frame of each session (two hours) is considered the limit for the children, because at the end of each session they were exhausted.

5. Discussion

The evaluation process revealed problems of design, usability, and functionality. All these problems were resolved, resulting in a more useful PAT. The evaluator's report, the recordings, and the usability metrics show that TITIBOTS complies with the requirements given by the experts and is easy and pleasant for children to use.

When comparing the third and fourth stages of the evaluation, we determined that the errors caused by the PAT and the robots were reduced significantly. With this result, we consider that our system is ready to be deployed in regular learning activities.

Results show that TITIBOTS allows children to play and to program robots in order to solve specific tasks. Moreover, the metrics that fell under the expected rate are explained by the complexity of the tasks and not by the use of TITIBOTS.

Satisfaction metrics show that 100% of the kids liked the programming tool and the robot (the 13 children in the third and fourth stages). Besides, 31% found the tool difficult to use (although this was not reflected in the observations and recordings), whereas 77% of the children stated that the robot was easy to use. As part of the satisfaction questionnaire, children were asked to draw what they liked best about the activity: 62% of the children drew a robot or a monkey, 8% drew a tablet, and 31% drew a tablet and a robot. When asked to describe their drawings, they said that they loved their monkey robots and liked to tell them what to do.

Even though the problems solved by the children are basic, such as moving the robot in a straight line or turning lights on and off, we believe that problems with a higher level of difficulty can also be solved. However, the time spent in this evaluation does not provide enough empirical evidence to support that claim. Further evaluation will allow us to fully determine the impact of our tool when used by children from the age of 4. We showed that programming concepts can be taught to children from age 4, and probably younger. However, at this age the amount of effort required to keep the children focused and trying to solve a problem is considerable.
Moreover, it is difficult for them to apply what they learned to real-life problems, because we are trying to develop abstract thinking in them by programming robots. Our goal is to teach children to think and solve problems in a structured way, using algorithms that will later help them resolve the problems they face in their lives, although the latter cannot yet be proven. In addition, programming is the future literacy, and we are working on it from an early age.

One clear impact perceived during the period in which we performed this evaluation was that, by using technology, teachers were able to keep the attention of children for longer periods of time. Several studies support this finding [4, 7, 8, 32, 33]; however, we applied it not only to keep the attention of children but also to teach them concepts that are usually too difficult to be understood by children of that age when presented in other ways.

The contribution to Mobile Information Systems is the design and evaluation of mobile interfaces that allow children in early childhood to program robots in an intuitive and easy way.

TITIBOTS enables preschoolers to be taught programming and robotics while they have fun playing, which promotes the development of skills such as problem solving, logical-mathematical thought, abstraction, and creativity in the medium to long term.

Studies show that the use of robotics in the teaching-learning process has helped children learn specific curricular content in STEM areas. In our study, we have not tested any of this, but it will be addressed as part of the future work of this research.

Through TITIBOTS, children develop soft and technical skills that are necessary nowadays. Moreover, we think that, by modifying the environment and designing it appropriately, we can create strategies for the children to collaborate in solving a given problem. This is part of our further work: a collaborative version of TITIBOTS.

6. Conclusions

In this paper we presented TITIBOTS, a PAT that allows children in early childhood to create programs using tablets and execute them with robots (allowing the use of any robot). We followed an extensive design and evaluation process, using several techniques to evaluate usability, including participatory design, experience prototyping, and usability testing.

We consider that the development of the project was successful.
Children between 4 and 6 years old found TITIBOTS easy to use. They manipulated the tool to infer the meaning and use of the icons and commands. In addition, children were always interested, happy, and attentive while using the PAT; most importantly, they had fun. It was evident to us that this PAT allows children to play and to program robots in order to solve specific tasks.

The evaluation of TITIBOTS is also considered successful, because the children were able to learn basic programming concepts, such as sequential problem solving, and they were able to verbalize their thinking when asked. We believe that the use of concrete and physical exercises (without the tool) with the children facilitates the use of the developed environment. The children that used the tool acquired a first approach to the basic process of solving a given problem, following the common programming steps: planning, implementation (writing the program), and testing (sending the commands to the robot). The designed teaching-learning activities were successful, caught the attention of the children, and achieved the expected objectives for each one.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by ECCI-UCR (Escuela de Ciencias de la Computación e Informática) and by CITIC-UCR (Centro de Investigaciones en Tecnologías de la Información y Comunicación), Grant no. 834-B3-260. Thanks are due to the FOD (Fundación Omar Dengo) for helping us in the validation and evaluation of TITIBOTS.

References

[1] G. S. Tjaden, "Measuring the information age business," Technology Analysis and Strategic Management, vol. 8, no. 3, pp. 233–246, 1996.
[2] M. Prensky, "Digital natives, digital immigrants part 1," On the Horizon, vol. 9, no. 5, pp. 1–6, 2001.
[3] M. Binkley, O. Erstad, J. Herman et al., "Defining twenty-first century skills," in Assessment and Teaching of 21st Century Skills, pp. 17–66, Springer, Berlin, Germany, 2012.
[4] S. Papert, Mindstorms: Children, Computers, and Powerful Ideas, Basic Books, New York, NY, USA, 1980.
[5] I. M. Verner and D. J. Ahlgren, "Conceptualising educational approaches in introductory robotics," International Journal of Electrical Engineering Education, vol. 41, no. 3, pp. 183–201, 2004.
[6] C.-T. Hsin, M.-C. Li, and C.-C. Tsai, "The influence of young children's use of technology on their learning: a review," Educational Technology and Society, vol. 17, no. 4, pp. 85–99, 2014.
[7] M. Resnick, Learn to Code, Code to Learn, EdSurge, 2013.
[8] M. Binkley, O. Erstad, J. Herman, S. Raizen, M. Ripley, and M. Rumble, "Defining 21st century skills," 2010.
[9] M. U. Bers, Designing Digital Experiences for Positive Youth Development, Oxford University Press, Oxford, UK, 1st edition, 2012.
[10] M. U. Bers, Blocks to Robots: Learning with Technology in the Early Childhood Classroom, Teachers College Press, 2008.
[11] M. Koorsse, C. Cilliers, and A. Calitz, "Programming assistance tools to support the learning of IT programming in South African secondary schools," Computers and Education, vol. 82, pp. 162–178, 2015.
[12] M. Buchenau and J. F. Suri, "Experience prototyping," in Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (DIS '00), pp. 424–433, New York, NY, USA, August 2000.
[13] R. Mack and J. Nielsen, "Usability inspection methods," ACM SIGCHI Bulletin, vol. 25, no. 1, pp. 28–33, 1993.
[14] M. Resnick, F. Martin, R. Sargent, and B. Silverman, "Programmable bricks: toys to think with," IBM Systems Journal, vol. 35, no. 3-4, pp. 443–452, 1996.
[15] S. Einhorn, "MicroWorlds, computational thinking, and 21st century learning," LCSI White Paper, 2011.
[16] K. Mayerová, "Pilot activities: LEGO WeDo at primary school," in Proceedings of the 3rd International Workshop Teaching Robotics, Teaching with Robotics: Integrating Robotics in School Curriculum, pp. 32–39, Riva del Garda, Italy, 2012.
[17] S. H. Kim and J. W. Jeon, "Programming LEGO Mindstorms NXT with visual programming," in Proceedings of the International Conference on Control, Automation and Systems (ICCAS '07), pp. 2468–2472, October 2007.
[18] D. Nam and T. Lee, "The effect of robot programming education by Pico Cricket on creative problem-solving skills," in Proceedings of the 19th International Conference on Computers in Education (ICCE '11), pp. 1–9, Chiang Mai, Thailand, December 2011.
[19] G. Tomoyose, "Minibloq, el lenguaje de programación argentino para robots que llega a todo el mundo," La Nación, 2014.
[20] A. Yera-Gil, Iniciación a la programación visual e interactiva y desarrollo de robótica educativa con Scratch y Enchanting, Universidad Pública de Navarra, Pamplona, Spain, 2010.
[21] S. Romero, I. Angulo, I. Ruíz, and J. M. Angulo, "Competencias y habilidades con el robot 'MOWAY'," in TAEE: Tecnologías Aplicadas a la Enseñanza de la Electrónica, p. 8, Universidad de Deusto, Bilbao, Spain, 2008.
[22] M. Rollins, Beginning LEGO MINDSTORMS EV3, Apress, 2014.
[23] V. Gupta, S. Gupta, and M. Greaves, "Play-i," Madrona Venture Group and Charles River Ventures, 2013.
[24] F. Yacob, M. Loglio, D. Di Cuia, V. Leonardi, L. Rabago, and J. Valman, "PRIMO," Solid Labs, 2013.
[25] A. Sullivan, M. Elkin, and M. U. Bers, "KIBO robot demo: engaging young children in programming and engineering," in Proceedings of the 14th International Conference on Interaction Design and Children (IDC '15), pp. 418–421, Medford, Mass, USA, June 2015.
[26] K. C. Louden and K. A. Lambert, Programming Languages: Principles and Practices, Cengage Learning, Boston, Mass, USA, 3rd edition, 2011.
[27] International Organization for Standardization, "ISO 13407:1999 Human-centred design processes for interactive systems," 1999.
[28] S. Greenberg and R. Bohnet, "GroupSketch: a multi-user sketchpad for geographically-distributed small groups," in Proceedings of Graphics Interface, pp. 207–215, Calgary, Canada, June 1991.
[29] J. Nielsen, Usability Engineering, Morgan Kaufmann, 1993.
[30] A. Granić and M. Ćukušić, "Usability testing and expert inspections complemented by educational evaluation: a case study of an e-learning platform," Educational Technology and Society, vol. 14, no. 2, pp. 107–123, 2011.
[31] R. E. Eberts, User Interface Design, Prentice Hall, Englewood Cliffs, NJ, USA, 1994.
[32] J. Nielsen, Usability Engineering, Elsevier, 1994.
[33] M. Bekker, W. Barendregt, S. Crombeen, and M. Biesheuvel, "Evaluating usability and challenge during initial and extended use of children's computer games," in People and Computers XVIII (Design for Life): Proceedings of HCI 2004, pp. 331–345, Springer, London, UK, 2005.