Contents

1. Introduction
2. Structural Design
    2.1. First Iteration: Wintersession Design
        2.1.1. Materials
        2.1.2. Description
        2.1.3. Results and Analysis
    2.2. Second Iteration: Final Design
        2.2.1. Materials
        2.2.2. Description
3. Programming and Operation
    3.1. Cricket 1: Hallway Navigation
        3.1.1. Following a Wall
        3.1.2. Making a 90° Turn
        3.1.3. Making a Decision
        3.1.4. Escape Clauses
    3.2. Cricket 2: Room Navigation and Extinguishing
        3.2.1. Determining the Presence of the Candle
        3.2.2. Approaching the Candle
        3.2.3. Positioning the Robot in Front of the Candle
        3.2.4. Extinguishing the Candle
    3.3. Cricket 3: Managing Control Flow
Appendix A: Floor Plan
Appendix B: Cricket Analysis
Appendix C: Control Flow Diagram
Appendix D: Cricket 1 Code
Appendix E: Cricket 2 Code
Appendix F: Cricket 3 Code
1. Introduction

Trinity College hosted its 7th annual Fire-Fighting Home Robot Contest in April 2000. The contest is a public robotics competition open to all entrants, regardless of age or experience. Each year, contestants come from a wide range of backgrounds and countries. The goal of the contest is to build a computerized robot that can find and extinguish a fire in a house. For the contest, each robot must navigate its way through a maze resembling a simple floor plan of a single-story house. A candle is placed in one of the rooms, chosen at random for each trial. The robot must find the candle and extinguish the flame, consistently and as quickly as possible. The ultimate goal of the contest is to have a robot that not only wins the contest, but can be adapted for commercial use, fighting actual fires in homes, businesses, warehouses, and other buildings. We decided to enter this contest after participating in the Wintersession Robotics Design Studio at Wellesley College.
Extensive specifications, including the dimensions and layout of the maze, were provided to all contestants prior to the competition. The walls of the maze were painted white. The floor was painted black, with white lines at each doorway. There was also a white circle (12" in diameter) on the floor around the candle. Additional challenges, such as floor ramps and furniture, could be added for scoring bonuses. Once the robot was started, the contestants could not touch it until it had finished the run.
Because the specifications of the environment were so thorough, it was possible for entrants to create a robot that could navigate the contest maze very well but would fail entirely if the specifications were modified even slightly. One simplistic way to navigate the maze was dead reckoning: the robot would be programmed to travel the exact distances necessary to navigate the hallways and rooms of the contest maze. In such a model, the robot would not interact with its environment while in the hallways. Some environmental interaction is unavoidable in determining which room contains the candle and in moving toward the candle, as the room and exact location of the candle are determined randomly at run-time.
We chose not to use any form of dead reckoning, deciding instead that it was more important to create a robot capable of navigating any maze of similar design to the one used in the contest. With this flexibility, we believed our robot would have greater value, as it would have more potential to be adapted to different types of rooms and buildings, something that would have been significantly more difficult if our robot had used dead reckoning.
Our robot used a wall-following algorithm, with the added feature of constantly checking for doorways on either side. It had the ability to make random decisions when presented with more than one viable direction of travel. The navigational system will be discussed in great detail later in this paper.
The structural design of our robot is discussed in Section 2. We built our first robot, described in Section 2.1, during the Wintersession course and then, using what we had learned in the process, constructed a new robot, which was our final design. The final design is described in Section 2.2. The programming aspect of the robot is discussed in Section 3. We used three Crickets to control the robot. Section 3.1 describes the program run on Cricket 1, which controlled the robot as it navigated its way through the hallways of the maze. Section 3.2 describes the program run on Cricket 2, which controlled the robot once it had entered a room; Cricket 2 was responsible for finding and extinguishing the candle. Section 3.3 describes the program run on Cricket 3, which managed the overall control flow. The code in its entirety can be found in Appendices D, E, and F. Although our final robot was able to complete all of the basic tasks, we did not enter it in the competition because we were unable to integrate the parts successfully.
2. Structural Design

In order to meet the requirements of the contest, the robot needed to meet certain structural specifications. The contest required that the robot be able to navigate the maze and extinguish the candle. The contest rules also specified that no part of the robot could extend beyond 12.25" in any direction. Additional factors that we took into consideration while building our robot included the use of space, the distribution of weight, and the sturdiness and stability of the robot.
2.1. First Iteration: Wintersession Design
We built our first robot over Wintersession 2000. Since we had not had much experience planning and building structurally sound vehicles, there were many serious structural flaws in this design.
2.1.1. Materials

We used a Handy Board to program and control the robot. This decision was made for several reasons. First, we had more experience using the Handy Boards than the Crickets. Second, the Handy Boards have more sensor ports, and we anticipated using many sensors. Finally, the launch command in Handy Logo allows multiple threads to run concurrently; Cricket Logo has a less elegant way to simulate concurrency. The entire physical structure of the robot was built out of Lego pieces. We tested several types of sensors before deciding which to use, and found that infrared (IR) sensors were best able to detect the light of the candle. We also tested, but did not use, heat and light sensors. Finally, we decided to use shaving cream to extinguish the candle. Since the contest was geared toward the eventual use of a robot to fight actual fires, we wanted a method that could be adapted to work on a larger scale. A shaving cream can, with its propulsion system, is somewhat akin to a fire extinguisher, and both smother a flame with foam. We found that in order for the shaving cream to shoot far enough to extinguish the candle, we needed to melt the nozzle down, reducing its diameter significantly. This was done by inserting a needle into the nozzle of the can and melting the plastic around it. Thus, by reducing the size of the spray, we increased the force behind it, and the shaving cream traveled much farther.
2.1.2. Description

The Wintersession robot was a simple vehicle with four wheels that carried the can of shaving cream, the Handy Board, and the pushing mechanism. The frame of the vehicle was constructed from two axles, connected on each side by two long Lego bricks and braced together in the middle. We built a cage, which rested on the front half of the frame, to hold the shaving cream can in place. The pushing mechanism was attached to the top of the cage. The Handy Board rested on the back part of the frame. Two motors, attached to the frame just behind the Handy Board, powered the rear wheels.
A great deal of torque was required to push down hard enough on the shaving cream can. The pushing mechanism that we devised was a gear train with a 27:1 reduction. A gear rack inserted into the back of the gear train pushed down on the can, dispensing the shaving cream. On each side, the axles of the gear train extended into a wall, two Lego bricks in thickness. The motor was built into one of the walls. These walls served as bracing, holding the gear train together and in place, and anchoring the motor. Due to the amount of torque required to dispense the shaving cream, it was necessary to anchor both the gear train and the motor very securely.
2.1.3. Results and Analysis

In building this robot, we focused on extinguishing the candle and not on navigating through the maze. The robot was able to move forward and zigzag towards the candle. However, its locomotion system was impractical for meeting the navigational requirements. The driving wheels were located at the rear of the robot, with two unpowered wheels at the front. The front wheels therefore carried the majority of the weight (the shaving cream and its cage) but were not powered directly. As a result, the front wheels generated a considerable amount of friction in turning and more power was required to propel the robot.
The cage that we built up around the can of shaving cream was a very inefficient use of space and weight; a less solid structure would have sufficed. Additionally, two thin axles supported the weight of the entire vehicle, and the connecting sides were not sufficiently secure, so the entire robot was structurally unsound. We also found that by melting the nozzles of the shaving cream cans around a needle, the spray could be made incredibly powerful, but it required a higher level of accuracy than we were able to achieve with our sensor readings. The melted nozzles also produced a high degree of variance between cans, since the angle at which the shaving cream was expelled depended on the angle of the needle, which was virtually impossible to keep constant from one melting to the next.
The pushing mechanism that we created for this robot was very effective and small in size. We found no serious faults with our design and kept it intact for our final design.
2.2. Second Iteration: Final Design
2.2.1. Materials

We made some changes in the materials used for the final iteration of the design. Most significantly, we chose to use Crickets instead of a Handy Board, because we wanted to use bathroom sensors, which are only compatible with the Cricket interface. (See Appendix B for a description of some of the problems we encountered in using the Crickets.) Bathroom sensors, a type of infrared sensor, give accurate distance readings for distances greater than 5cm by sending out IR light and measuring how much comes back. We also used two other types of sensors on our robot. We used two candle sensors, IR sensors that do not send out light but measure ambient IR light; this was useful in distinguishing the candlelight from the lighting in the room. Finally, we used smaller, basic IR sensors similar to those we used with the Handy Board. We used these in the shaft encoder and in detecting white lines on the ground.

We continued to use Lego pieces for the majority of the physical structure of the robot. This proved to be a good decision, as Legos are easily attached, are relatively stable, and allow for modularization. We also added furniture casters to the structure, in addition to two sturdy wheels. The casters greatly reduced friction in turning, and the new wheels provided better support for the entire structure.

We continued to use shaving cream as our method of extinguishing, and tested many different brands of shaving cream. We found that Regular Gillette Foamy shot the farthest. By using this particular brand, we no longer needed to melt the nozzles of the cans, which reduced the variability between the different cans we used. However, without melting the nozzles, the robot needed to be closer to the candle in order to extinguish it. Because of this change, we ran into some trouble with lighting the propellant of the shaving cream can on fire. We found that this was only a problem when the can was nearly empty, so we cut back the number of trials we used on each can. Fortunately, none of the fires lit by the propellant were serious.
2.2.2. Description

The structural design can be broken up into three components: the base, the wheels and gearing, and the shaving cream and pushing mechanism. The base of our final structure was much more stable than that of the version we built over Wintersession. It had a solid floor made of beams that were braced together. The front of the structure was wide enough to hold the can of shaving cream and the two motors and gear trains. The back of the base was slightly narrower, allowing room for the three Crickets, one battery pack, and many different sensors at varied locations.
The robot used five bathroom sensors to gather information about the surroundings of the robot. We initially planned to use seven bathroom sensors (two facing left, two facing right, two facing back, and one facing forward) but we discovered that having a large number of devices connected to the bus ports increased the likelihood of faulty sensor readings and random failure of the Crickets.
In order to reduce these errors, we used only five bathroom sensors (two facing left, two facing right, and one facing forward). The two on the left were used in keeping the robot parallel to the left wall and in sensing doorways on the left side. The two on the right were used to sense doorways on the right side, and also to align the robot parallel to the right wall in the absence of a wall on the left. The one on the front was used to sense when the robot was approaching a wall. By reducing the number of bathroom sensors, we arrived at a more efficient solution; we had initially planned to use the back sensors in alignment as well, but this proved unnecessary.
Our initial gearing system included a pulley mechanism as the first gear reduction. This had to be replaced because it caused significant slippage. We went through several iterations of the gear train before constructing the final one. The gear trains were encased in two towers that also provided the base for the pushing mechanism. In order to accommodate the gearing system, other minor modifications had to be made to the structure of the gear towers. We used a gear reduction of 9:1, in addition to the internal gearing in the motors, which gave enough torque to propel the weight of the structure while still maintaining ample speed to navigate through the maze within the allotted six minutes. The gear train consisted of 8-, 16-, and 24-tooth gears: an 8-to-24 gear pair from the motor, then a 16-to-16 pair for purposes of spacing, and finally another 8-to-24 pair connecting directly to the wheel, for an overall reduction of (24/8) × (16/16) × (24/8) = 3 × 1 × 3 = 9.
We tried a number of different tire options through the course of building this robot. We first used Lego tires with a plastic center and a rubber outer ring. Under the weight of the robot, the flexible rubber part of the wheels bulged, causing a good deal of friction. In order to make the wheels more solid, we filled the rubber outer rings with hot glue. Filling this space lessened the bulging and decreased the surface area in contact with the floor. The wheels then ran with less friction, but it was difficult to maintain an even thickness of glue and to mold it such that the tire treads were straight; the resulting wheels caused the robot to veer noticeably.
The final robot used a commercially designed wheel with a wider tire and a more solid construction. We extended the base to allow for an extra beam on the outside of each wheel to stabilize the wheels and keep them perpendicular to the floor. We also added bushings to fill the axles, ensuring that the gears could not slip out of place. In addition to the two wheels located near the front of the robot, two casters supported the base, which allowed for easy turning with minimal added friction.
Initially, we placed both casters at the back of the robot, creating a rectangular shape with the wheels and casters. However, we were somewhat concerned with the distribution of weight in this design. Most of the weight of the vehicle was contained in the shaving cream can, the columns that hold the gear trains and support the pushing mechanism, and the pushing mechanism itself. All of this weight was concentrated at the front of the base. Thus, if the robot had been knocked off balance, it would have tipped over and not been able to recover and continue its run. Furthermore, we planned to enter the robot in the "Non-Dead Reckoning" mode, in which ramps were placed in the hallways; we had to ensure that the robot would be able to successfully go down the ramp without bottoming out or tipping over. To alleviate this problem, we moved one of the two casters from the back to the front, and centered both of the casters, creating a diamond shape (see Figure 1).
We were pleased with the pushing mechanism that we created over Wintersession and kept it fully intact (see Figure 2). From the columns housing the gear trains, we built up two hollow columns to which the pushing mechanism was secured. The additional height was necessary because the shaving cream nozzle needed to be higher than the 6" to 8" candle; the stream of shaving cream arced downward, so the flame needed to be lower than the point of origin of the shaving cream. Moreover, the base of the pushing mechanism was deeper than the gear columns, so we made the hollow columns larger to allow for a more secure attachment of the pushing mechanism. As stability was one of our main goals in this structural design, we felt that the additional size and weight of the two hollow towers were worth the added stability they provided.
Figure 2: The pushing mechanism.
The can of shaving cream sat on top of a simple platform, built up from the base so that the nozzle was high enough. The can was anchored by Lego bricks at the corners of the platform to ensure that it was correctly placed under the pushing mechanism. The space under the can housed the front bathroom sensor (see Figure 3). Since the bathroom sensors are accurate in ranges greater than about 5cm, we wanted to ensure that a wall was never closer than 5cm to the sensor. We placed the front bathroom sensor in the recessed space under the can to prevent inaccurate readings. Additionally, this placement centered the sensor in the middle of the robot. Because of the sensor's recessed placement, it also remained cleaner throughout the many shaving cream trials.
Figure 3: Front view of the robot. The front bathroom sensor is in
the recessed space directly under the shaving cream can.
3. Programming and Operation

3.1. Cricket 1: Hallway Navigation
The navigational system of the hallway mode was comprised of four parts: following a wall, making a 90° turn, making a decision, and escaping from problematic situations. We tried a number of different methods for each of these components. Although we ended up with procedures that worked for each component, we were not able to integrate them successfully in the time we had remaining.
3.1.1. Following a Wall

The most basic function of the robot was to advance down a hallway without running into the walls on either side. To do this, it used an algorithm that kept the robot parallel to the wall on its left, while remaining a reasonable distance away from the wall so as to stay in the center of the hallway. We tried several different alignment algorithms, but all used the same sensors and principles for determining whether or not the robot was parallel to the wall. First, the two bathroom sensors on the left side were checked and their values compared. If the values were within a certain range of each other, they were considered to be "equal." The robot was then considered to be parallel to the wall and could continue going forward. If the back left sensor was significantly closer to the wall than the front left sensor, the robot turned to the left until it was once again aligned properly. If the front left sensor was significantly closer to the wall than the back left sensor, the robot turned to the right until it was realigned. By adjusting its orientation with respect to the wall in this way, Cricket 1 kept the robot oriented in a parallel/perpendicular fashion to the walls of the maze.
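In outline, the comparison might look something like the sketch below. The names fldist, bldist, and align-range are illustrative placeholders (chosen in the style of the bldist and brdist variables that appear later), not the actual identifiers used in Appendix D; fldist and bldist are assumed to hold the most recent front-left and back-left bathroom sensor readings.

to aligned?
  ;report 1 if the two left readings are within align-range of each other
  ifelse ((fldist - bldist) < align-range and (bldist - fldist) < align-range)
    [output 1]
    [output 0]
end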
The main difference between the three algorithms we tried was the timing of the sensor checking and the adjustments made to the alignment. We first used dynamic alignment, in which Cricket 1 checked the left sensors continually as the robot moved forward. If it determined that the robot was no longer parallel to the wall, it turned off one wheel to start turning the robot. It then waited until the sensors were "equal" (within the specified range), and then turned both wheels back on. We had hoped that by correcting its alignment as it progressed, the robot would make better time through the maze. However, it proved to be more problematic than time-saving. By the time Cricket 1 determined that the robot was no longer parallel to the wall and acted on this, the position of the robot had changed significantly, since the robot had continued to move forward while Cricket 1 ran through the program. Because of this, the robot's course was usually not changed soon enough, and it would then run into the wall.
We then switched to static alignment, in which the robot stopped frequently to check its alignment. After turning both wheels off, Cricket 1 would check the alignment in the same manner, by comparing the left sensor values, and then start the turn. It still waited until the sensor readings were "equal" before stopping the turn. This solved the problem of not correcting the robot's course quickly enough, but we still ran into problems because Cricket 1 was checking the alignment while the robot was turning. Again, the Cricket was acting on outdated and therefore inaccurate information, so the robot often turned too far, putting it out of alignment in the other direction, before Cricket 1 recognized that it had aligned and stopped the turn.
We then arrived at our final alignment algorithm. The robot stopped and Cricket 1 checked the alignment. If the robot was not parallel to the wall, it "tweaked" to the right or to the left by turning for a very short, specified amount of time, as in the procedure below. (Many of the procedures in the body of this paper are slightly modified for clarity. See the appendices for the final code in its entirety.)
to tweak-right
  right-wheel thatway on
  left-wheel thisway on
  delay 50                ;similar to the wait command, but shorter
  both-wheels off
end
After one tweak, the robot stopped and Cricket 1 checked the alignment. If necessary, it tweaked again. Thus, Cricket 1 always acted upon accurate information about the current position of the robot. This alignment procedure worked very well, although the frequent stops did increase the running time of the robot. Had we perfected the rest of the program, we would have experimented with the frequency of these stops. We had hoped to alter the code such that the robot would stop less frequently while staying sufficiently aligned.
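Putting these pieces together, the stop-check-tweak cycle might be organized roughly as follows. This is only a sketch: aligned? is the placeholder predicate described above, back-closer? is a hypothetical predicate reporting whether the back left sensor is closer to the wall than the front left sensor, and get-sensors refreshes the sensor readings as it does elsewhere in our code.

to align
  loop
    [get-sensors              ;refresh the left-side readings
     if (aligned?) [stop]     ;parallel within tolerance, so we are done
     ifelse (back-closer?)    ;nose is pointed away from the wall
       [tweak-left]           ;swing the nose back toward the wall
       [tweak-right]]         ;otherwise swing it away from the wall
end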
The robot stayed in the center of the hallway by using a similar process. Through experimentation, we determined the range of sensor readings that indicated an appropriate distance from the left wall. Every time the robot stopped and Cricket 1 checked its alignment, Cricket 1 also checked its distance from the left wall. If both left sensors were within the range, the robot was more or less in the middle of the hallway and could continue to go forward. The range used was deliberately small, so that only minor adjustments were needed to keep the robot centered. If both sensors were outside of the range, the robot made a small jog to correct this. For example, if both sensors were too far away from the left wall, it jogged to the left by first turning left briefly (similar to the tweak procedure) and then going forward briefly, as in the procedure below.
to jog-left
  right-wheel thisway on
  left-wheel thatway on
  wait 2
  both-wheels thisway on
  repeat 2                  ;by experimentation, we found that the robot
                            ;needed to go forward for the amount of time
                            ;it took to repeat get-sensors twice
    [get-sensors
     if (interrupt)         ;checking to see if it needs to turn
       [stop]]
end
After calling this jog procedure, Cricket 1 turned the robot to the right briefly to realign it with the wall. If both sensors were too close to the left wall, it jogged to the right in the same manner. At the end of the jog, Cricket 1 checked the alignment and position of the robot again, and readjusted if necessary. The jogs made rather minor corrections, so occasionally the robot needed to make two consecutive jogs to get back into the middle of the hallway. However, since we deliberately made the range small enough to prevent this from occurring most of the time, the robot did not usually have time to get very far from the center of the hallway before Cricket 1 corrected its course, so one jog was almost always sufficient.
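For illustration, the centering check might be layered on top of the alignment code along the following lines. Here too-far? and too-close? are hypothetical predicates reporting that both left readings fall outside the target range (on the far and near side of the hallway, respectively), and jog-right is the mirror image of jog-left; the actual code is in Appendix D.

to check-center
  if (too-far?)              ;drifted away from the left wall
    [jog-left
     tweak-right]            ;brief right turn to face back down the hallway
  if (too-close?)            ;drifted toward the left wall
    [jog-right
     tweak-left]
end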
The alignment procedure also ensured that, if the robot arrived at an intersection, it was oriented in a parallel/perpendicular fashion to the walls of the maze. When Cricket 1 checked the alignment of the robot, it also checked for doorways on either side, indicating either a room or another section of the hallway. If either front side sensor detected a doorway, the wall-following procedure was interrupted because a decision needed to be made about which way the robot should proceed. Before making a decision, however, Cricket 1 ensured that the robot would be properly oriented by backing it up until it was back in the hallway out of which it had come and then realigning it to the left wall. After realigning, the robot went forward a specified distance, measured with the shaft encoder (explained in Section 3.1.2). At that point, Cricket 1 evaluated its position and decided which way to go.
3.1.2. Making a 90° Turn

A second basic function that the robot needed was the ability to turn a corner. An approximation of a 90° turn was sufficient, since the robot could correct small errors by using the alignment procedure described in Section 3.1.1. We tried three different algorithms to solve this problem. In all of them, the turn was triggered by a decision made after the wall-following procedure was interrupted. Initially, we only considered the case in which the robot was approaching a wall to the front, as indicated by the front bathroom sensor reading crossing a certain threshold. At this stage, the front-wall case directly triggered the turn procedure. Once we had a turning procedure that worked, we added in the decision-making step. Once that worked, we added in checks for all the cases that necessitated an interruption (a front wall, a right doorway, and a left doorway) so that all of these conditions would trigger a decision.
Our first turning algorithm was dependent entirely upon sensor readings. Cricket 1 began to turn the robot when the front sensor detected an approaching wall. It continued to turn until the front sensor reading indicated that there was no longer a wall directly in front of the robot. This was determined by a different threshold value. Once the robot had completed the turn, as indicated by the front sensor, it went forward again and returned to the wall-following procedure.
However, there were many problems with this method. Although the robot could make relatively accurate 90° turns using this procedure, on the whole it was unreliable because there were many situations in which the front sensor read erroneous values. There were a number of positions in which the front sensor would indicate that there was no front wall near the robot, while in actuality the robot had not completed the turn. Sometimes the sensor detected the wall at such a sharp angle that it indicated that the wall was farther away than it really was. Sometimes the sensor detected a wall across an intersection and considered that wall to be the front wall, which also resulted in erroneous readings. The sensor value sometimes did not cross the threshold at all, causing the robot to turn forever. We therefore abandoned the idea of a sensor-based turn.
Next we tried a timed turn, also triggered by an approaching wall to the front. Cricket 1 started the turn and waited for a specified period of time before ending the turn. Since this procedure did not rely on sensor readings and the environment, it was much more reliable, although determining the appropriate period of time required some experimentation. However, the degree of the turn was still not consistent because the amount that the robot turned depended on the strength of the batteries, which varied greatly as a function of how long the batteries had been used.
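A timed turn is simple to sketch; it has the same structure as the final turn-right-corner procedure shown below, but substitutes a fixed wait for the revolution count. The duration here is only a placeholder; the real value had to be tuned experimentally and still drifted with battery strength.

to turn-right-timed
  left-wheel thisway on
  right-wheel thatway on    ;wheels in opposite directions, as in turn-right-corner
  wait 15                   ;placeholder duration, tuned by experiment
  both-wheels brake
end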
We then added a shaft encoder to the gear train that counted the number of axle revolutions. The shaft encoder consisted of a cardboard disc on one of the axles in the gear train, and an IR sensor. The disc had four sections cut out, leaving four sections intact, like a fan with four blades. The IR sensor was pointed at the disc and positioned very close to it, so that it could distinguish between the sections of the disc that were intact (the blades) and those that were cut out (the holes) (see Figure 4).
Figure 4: The shaft encoder.
If the sensor value was above a certain threshold (rev-thold in the code below), a blade was in front of the sensor. If the sensor value was less than that threshold, a hole was in front of the sensor. By waiting until the sensor had detected a blade passing a certain number of times, the shaft encoding method was able to accurately determine how many revolutions the wheels had made. If, after interrupting the wall-following procedure, Cricket 1 decided that the robot should turn, the turn procedure started the turn and began counting axle revolutions. After a certain number of revolutions, which was determined experimentally (turn-revs in the code below), it stopped the turn, as in the procedure below.
to turn-right-corner
  left-wheel thisway on
  right-wheel thatway on
  repeat turn-revs
    [waituntil [sensora > rev-thold]
     waituntil [sensora < rev-thold]]
  both-wheels brake
end
The shaft encoder provided the most accurate measurement of the turn. Unlike a timed turn, a turn made with a shaft encoder did not vary with battery strength. The shaft encoder could not account for any slipping that occurred in the axles or wheels, but we did not encounter any problems in that area. Unlike the sensor-based turn, the shaft-encoded turn ensured that the robot turned a consistent number of degrees each time. However, the shaft-encoded turn was unable to take into account any information about the environment, which caused other problems. Since it always turned the same amount, the robot only ended up in the proper position and orientation if it had begun in the proper position and orientation. For example, if the robot ended up in the middle of an intersection oriented at a 45° angle to the walls of the maze, the turning procedure had no way of detecting or correcting this. The robot then turned approximately 90° and was again at a 45° angle to the wall. There also was no way of interrupting the turn. It was thus possible for the robot to begin the turn from a skewed position and run into a wall. We were able to compensate for many of these problems by implementing escape clauses, discussed in Section 3.1.4.
After the turn, the robot encountered two problems if it immediately tried to align to and follow the left wall. The first problem was that there was often no wall to its left for it to follow. The second was that as soon as it returned to the wall-following procedure, it started looking for doorways again. It therefore often turned and went back the way it came, since it had no way to distinguish between the hallway out of which it had just come and a new doorway. We altered the code so that the robot would go forward a certain distance, measured with the shaft encoder and with certain checks for extenuating circumstances (see the code below), to ensure that it was out of the intersection before trying to follow the left wall again.
to corner-forward :distance :lwall
  repeat 4
    [both-wheels thisway on
     repeat (:distance / 4)
       [waituntil [sensora > rev-thold]
        waituntil [sensora < rev-thold]]
     both-wheels brake
     if (newir?) [stop]
     if (:lwall) [align]]
end
Using this procedure, the robot went forward for only one-fourth of the necessary distance at a time. After going one-fourth of the distance, it stopped and checked for two things. The first check was for a message from Cricket 3 indicating that the robot had entered a room. Given the specific layout of the maze, the robot only entered a room immediately after turning a corner, so this was the only place in the code where it was necessary to check for this signal. (Although we wanted our robot to be adaptable to any similar maze, we did not have the time to account for all possible variations. We therefore used some of the specific characteristics of the contest maze in programming the robot.) The second check was for a wall on the left. If there was a wall on the left, the robot aligned to it before continuing.
3.1.3. Making a Decision

There were a number of different possible situations in which the robot could turn. In some of these situations, the robot had no choice but to turn in a certain direction. In other situations, the robot had a choice between turning either right or left, or between turning or going straight. (Given the design of the maze, there were no situations in which the robot could choose between three viable options. Again, we did not have the time to account for all possible variations in similar mazes.) In these situations that involved a choice, we wanted the robot to choose randomly between the two feasible options. Without this element of randomness, there would have been parts of the maze that it would not have reached.
We implemented an algorithm for making a decision that was called every time the robot entered an intersection of some kind. An intersection might be a turn in the hallway, the meeting of two branches of the hallway, or an entrance to a room. All of these intersections were treated in the same way by the decision-making procedure, so all openings were considered doorways. Thus a doorway might lead into a room, but it might also lead into another section or branch of the hallway.
Cricket 1 first checked the values of all the bathroom sensors and determined which of three conditions were true: a wall directly in front of the robot, a doorway to the left of the robot, and a doorway to the right of the robot. If there were walls to the front and to the left, and a doorway on the right, Cricket 1 decided to turn right. Similarly, if there were walls to the front and to the right, and a doorway on the left, Cricket 1 decided to turn left. If there was a wall to the front and a doorway on each side, Cricket 1 chose randomly between turning right and turning left. If there was a wall to the left only, and no wall to the front or to the right, it chose randomly between going forward and turning right. Finally, if there was a wall to the right only, and no wall to the front or to the left, it chose randomly between going forward and turning left. To make a random decision, it used the random operator provided by Cricket Logo to generate a pseudo-random number between 0 and 32767, and then used the modulus operator (%) to report a random number within the appropriate range, as in the code below.
to decide
  get-sensors-special                        ;checks the values of all the bathroom sensors
                                             ;and sets the variables fwall, ldoor, and rdoor
  ifelse (fwall)
    [ifelse (rdoor and ldoor)
       [setdecision ((random % 2) + 1)]      ;sets decision to either 1 or 2
       [if (rdoor) [setdecision 3]
        if (ldoor) [setdecision 1]]]
    [ifelse (rdoor)
       [setdecision ((random % 2) * 3)]      ;sets decision to either 0 or 3
       [ifelse (ldoor)
          [setdecision (random % 2)]         ;sets decision to either 0 or 1
          [setdecision 0]]]
end
After this procedure was called, Cricket 1 checked the value of the variable decision to determine which way the robot should go next. A value of 0 indicated that the robot should go forward. A value of 1 indicated that the robot should turn left. A value of 2 or 3 indicated that the robot should turn right. If Cricket 1 decided to turn right, it was possible that there was a wall to the robot's left, but this was not necessarily the case. If there was a wall to the left, decision was set to 3 and Cricket 1 used that wall to align the robot during the turn. If there was no wall to the left, decision was set to 2 and Cricket 1 did not try to align to it during the turn; given the layout of the maze, this situation only occurred when the robot also would not have a wall to its right to which it could align during the turn. The other two decisions did not require this differentiation. If Cricket 1 instructed the robot to turn left, there was clearly no wall to the left. If the decision was made to go forward, there was either a wall to the right or to the left; Cricket 1 simply checked for a left wall, aligned to it if it was there, and aligned to the right wall if there was no left wall. If Cricket 1 decided that the robot should turn, the robot turned and then went forward a specified distance using corner-forward, as described in Section 3.1.2. If Cricket 1 decided that the robot should go forward, the robot simply went forward, using the corner-forward procedure for the same reasons.
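The dispatch on the decision variable might be organized roughly as shown below. This is a simplified stand-in for the actual code in Appendix D: turn-left-corner, fwd-revs, and lwall are hypothetical names (turn-right-corner and corner-forward are the procedures shown earlier), and the handling of alignment when going forward with only a right wall is omitted here.

to act-on-decision
  if (decision = 0)                     ;go forward through the intersection
    [corner-forward fwd-revs lwall]     ;align to the left wall if one is there
  if (decision = 1)                     ;turn left; no wall on the left to align to
    [turn-left-corner
     corner-forward fwd-revs 0]
  if (decision = 2)                     ;turn right with no wall available for alignment
    [turn-right-corner
     corner-forward fwd-revs 0]
  if (decision = 3)                     ;turn right, then align to the wall on the left
    [turn-right-corner
     corner-forward fwd-revs 1]
end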
We initially encountered some problems with the decide procedure being called when the robot was only halfway into an intersection. In this case, a wall to the front would not yet be close enough for Cricket 1 to recognize it as a front wall. Cricket 1 therefore often made the decision to go forward. Corner-forward, however, did not check for an approaching wall because it assumed that an appropriate decision had been made based on accurate information. Therefore, when the robot went forward in this case, it ran into the wall halfway through corner-forward. We implemented two measures to ensure that this situation would not occur.
The first preventative measure was to initialize the variable decision to 4 instead of 0. If decide left decision at its default value, the value 4 indicated that something had gone amiss and the situation needed to be reevaluated, whereas a default value of 0 would have indicated that the robot should go forward. Although decide should have always reset decision, it was possible for decide to leave decision unaltered if fwall were true but neither rdoor nor ldoor were true. The second preventative measure was the prepare-decide procedure. This procedure evaluated the position of the robot when the decide procedure was called, and then made any adjustments necessary to ensure that the robot was positioned properly in the intersection before making a decision. For the robot to be considered positioned properly, it needed to be fully into the intersection and oriented in a parallel/perpendicular manner to the walls of the maze. When prepare-decide was called, the variables fwall, ldoor, and rdoor had already been set based on the readings of the front sensor, front left sensor, and front right sensor. For example, ldoor was true if the front left sensor detected a doorway, regardless of the back left sensor value. The robot then started going forward. Prepare-decide stopped the robot once it was all the way into the intersection, which was determined by the back side sensor indicating a doorway.
to prepare-decide
  both-wheels thisway on
  ifelse (rdoor)
    [loop
       [get-sensors-special
        if (fwall) [stop]
        if (brdist < 120) [stop]]]
    [if (ldoor)
       [loop
          [get-sensors-special
           if (fwall) [stop]
           if (bldist < 100) [stop]]]]
end
Once both side sensors detected the doorway, control was passed back to the main decision-making procedure, which then evaluated the situation and made a decision. This ensured that the robot was all the way into the intersection before making a decision and turning. The robot also needed to be oriented properly in the intersection, but this was handled by the align procedure and is described in Section 3.1.4.
The thresholds used in the code above were not abstracted into global variables because of limitations on the number of global variables allowed in Cricket Logo. This and other features of Crickets and Cricket Logo are discussed in Appendix B.
3.1.4. Escape Clauses

The robot often found itself in problematic situations. The robot should have always been oriented in a parallel/perpendicular fashion to the walls. On a practical level, however, it often ended up askew. Ultimately, not being on a parallel course resulted in the robot running into a wall, so these situations were to be avoided, and if encountered, rectified as quickly as possible. We therefore implemented a number of escape clauses and prevention clauses, some of which have been discussed in previous sections.
The robot often tried unsuccessfully to align for a long period of time. Sometimes it was almost aligned, so that, as minor as the tweaks were, tweaking left turned it too far to the left, while tweaking right in response turned it too far to the right again. In an attempt to fix this problem, we altered the tweak procedures such that the tweak-right procedure turned the robot to the right slightly more than the tweak-left procedure turned it to the left. Since the tweak procedures no longer adjusted the robot to the same degree, the robot no longer tweaked back and forth indefinitely if the aligned orientation were located exactly halfway through a tweak. We made the right tweak longer because the robot tended to veer to the left rather than to the right, so it would soon correct the slightly longer right tweak with its normal veering. We also included a time-out clause in the align procedure. If the robot tried to align for a certain amount of time and had not yet succeeded, it stopped trying. Instead, it went forward briefly and then tried to align again.
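One way the time-out could be folded into the alignment loop is sketched below, using a counter of tweak attempts as a stand-in for elapsed time. The names tries, settries, and max-tries are hypothetical (tries would have to be declared as a global), and aligned? and back-closer? are the placeholder predicates from Section 3.1.1.

to align-with-timeout
  settries 0
  loop
    [get-sensors
     if (aligned?) [stop]
     if (tries > max-tries)        ;given up: creep forward and start over
       [both-wheels thisway on
        wait 3
        both-wheels off
        settries 0]
     ifelse (back-closer?) [tweak-left] [tweak-right]
     settries (tries + 1)]
end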
Other times, it tried to align when there was not a wall within range. This happened if, in the process of aligning, the robot tweaked its way into an intersection, and occasionally happened under other extremely random circumstances (generally non-reproducible, which is why we will not explain them all here). We altered the align procedure to handle this situation. If, while in align, one of the front side sensors detected a doorway, the robot backed up into the hallway from which it had come. It then aligned itself to the left wall, which was the same wall that it had been following prior to reaching the intersection. After aligning, the robot went forward a specified distance, similar to the corner-forward procedure described in Section 3.1.2, to position itself properly in the intersection.
The robot sometimes got so close to the wall that the side of the robot was stuck against the wall. In this situation, the robot could not turn at all (primarily because of the length of the robot), so the jog procedures were unable to remedy the problem. We did not have time to implement an escape clause for this situation, but the robot proved to be surprisingly good at getting itself away from the wall; it was powerful enough to push itself off the wall most of the time. We did, however, implement measures to prevent this situation from occurring. We narrowed the range that kept the robot in the middle of the hallway, and we had it check whether or not the robot was in this range more often. These preventative measures worked almost all the time, particularly in the final stages of the code, so the robot almost never got stuck in this manner toward the end of the project.
Finally, the robot sometimes went forward without detecting an approaching wall, and ran into it. Despite the apparent simplicity of this problem, writing an escape clause was complicated by the fact that the bathroom sensors were not accurate at very short distances. Therefore, when the robot was up against a wall in front, the value of the front bathroom sensor usually indicated that the wall was several inches away. This situation rarely occurred when the code was working properly, so it was usually a sign that something was wrong with the code. Because of this, we did not take the time to implement an escape clause for this situation, since fixing the code generally fixed the problem of running into a wall. Nonetheless, it would have been a good escape clause to add so that if the robot had gotten into that situation, it could have backed up and kept going.
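Had we added it, such an escape clause might have looked something like the sketch below, where front-blocked? is a hypothetical predicate on the front bathroom sensor (and, as noted above, detecting this condition reliably with that sensor was itself the hard part) and the durations are placeholders.

to escape-front-wall
  if (front-blocked?)
    [both-wheels thatway on     ;back straight up, away from the wall
     wait 10
     both-wheels brake
     align]                     ;realign to the left wall and resume
end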
3.2. Cricket 2: Room Navigation and Extinguishing
Cricket 2 was responsible for four main components: determining whether the candle was in a room, approaching the candle, positioning the robot properly in front of the candle, and extinguishing the candle. We were not able to integrate all of the components of Cricket 2. Extensive testing of the extinguishing procedure was done to ensure the safety of the method and to find the optimal distance from which to extinguish the candle. However, due to time constraints, we were not able to integrate the extinguishing procedure with the procedure by which the robot approached the candle.
3.2.1. Determining the Presence of the Candle
The two candle sensors were connected to Cricket 2, and their values indicated whether or not the candle was in the room. If the sensor values were above a certain threshold, determined experimentally (inroom in the code below), the candle was in the room. If the sensor values were below that threshold, the candle was not in the room. The following simple method checked whether or not the candle was in a given room:
to check-room
  ifelse (sensora > inroom and sensorb > inroom)
    [output 1]      ;candle in room
    [output 0]      ;candle not in room
end
3.2.2. Approaching the Candle

Once Cricket 2 determined that the candle was in the room, the robot zig-zagged toward the candle, trying to keep the left and right candle sensor readings relatively equal. If the sensor readings were within a certain range of each other, they were considered to be "equal," indicating that the robot was pointed at the candle. Since the motors controlling the wheels were connected to Cricket 1, Cricket 2 directed the robot toward the candle by sending signals to Cricket 1. Cricket 2 first signalled for both wheels to be turned on, and the robot started moving forward. As long as the values of the candle sensors were not equal, Cricket 2 signalled for one wheel to be turned off while the other was left on, turning the robot and creating the zig-zag motion. Cricket 2 signalled Cricket 1 to stop the advance of the robot when Cricket 3 sent the signal indicating that the robot had crossed the white line that was 12" from the candle.
to zigzag
  send 6                              ;go forward
  loop
    [get-sensors
     if (leftir > rightir) [send 4]   ;turn left
     if (rightir > leftir) [send 5]   ;turn right
     if (newir?)
       [if (ir = 3)                   ;within 12" of candle
          [send 7                     ;turn wheels off
           align-candle               ;final alignment
           send 16                    ;stop Cricket 1
           extinguish]]
     wait 2]
end
Upon entering a room, Cricket 3 interrupted the wall-following procedure of Cricket 1 and signalled Cricket 2 to check the room for the presence of the candle. At this point, the reliability of Cricket 2's decision often depended upon the placement of the candle in the room. In some situations, the robot needed to make a full 90° turn in order to be pointing at the candle. When the angle was of such magnitude, Cricket 2's decision was less reliable, and the robot occasionally turned in the wrong direction.
The addition of black tubing around the sensors increased the reliability and accuracy of the candle sensors, thereby increasing the reliability of Cricket 2's decisions regarding the location of the candle. The tubing restricted the range from which the sensors received IR light, thus lowering their readings. Without the tubing, both candle sensors read the maximum value in their range before the robot was close enough to the candle to extinguish it. Restricting the receiving range and lowering the sensor readings resulted in meaningful values of the candle sensors at a closer range than was possible without the tubing. Additionally, the tubing increased the likelihood that the sensor readings were affected only by the IR light from the candle, as opposed to any ambient IR light, thereby increasing the accuracy of the readings.
We included an escape clause that stopped the robot if the sensor readings became too high. The robot should have sensed that it had crossed the white line around the candle and stopped before the sensor readings ever crossed this threshold, but we added this clause as a safeguard, to ensure that our robot would never run into a lit candle. Fortunately, we never encountered this problem.
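A minimal sketch of this safeguard is shown below; danger-thold is a hypothetical threshold set well above any reading the robot should see before crossing the white line, and sensora and sensorb are the two candle sensors, as in check-room.

to safety-check
  if (sensora > danger-thold)
    [send 7                     ;7 told Cricket 1 to turn the wheels off (see zigzag)
     stop]
  if (sensorb > danger-thold)
    [send 7
     stop]
end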
3.2.3. Positioning the Robot in Front of the Candle
When the robot stopped, it usually was not pointed directly at the candle, since it had been zig-zagging toward it up to that point. The robot needed to be pointed directly at the flame in order to extinguish it. This alignment used pivoting (both wheels on, in opposite directions), rather than simple turning (one wheel on with the other wheel off), so that the robot did not get any closer to the candle. After this final alignment, the robot was ready to extinguish.
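From Cricket 1's side, a single pivot step might look like the sketch below, mirroring turn-right-corner in the opposite direction for a brief fixed interval; in the actual robot, Cricket 2 drove these adjustments indirectly by sending IR signals to Cricket 1 (see Appendices D and E).

to pivot-left
  left-wheel thatway on       ;both wheels on, in opposite directions,
  right-wheel thisway on      ;so the robot rotates in place
  delay 50                    ;brief nudge, as in the tweak procedures
  both-wheels off
end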
At this point, it was assumed that the robot was aligned properly in front of the candle, 5" to 6" away from the flame. The contest specifications stipulated that the flame would be between 6" and 8" tall. Given this information, we determined experimentally that the robot was most likely to extinguish the candle if it was 5" to 6" away.
3.2.4. Extinguishing the Candle
To extinguish the candle, the gear rack of the pushing mechanism was pushed down for a set amount of time, and then recoiled by reversing the direction of the motor for a set amount of time:
to extinguish
  a, thisway on     ;gear rack going down
  wait 200
  a, thatway on     ;gear rack going up
  wait 200
  a, off
end
3.3. Cricket 3: Managing Control Flow
Cricket 3 managed the control flow at the highest level with IR signals. Connected to it was an IR sensor pointed at the floor, which distinguished between white and black surfaces. By checking this sensor value continually, Cricket 3 determined when the robot crossed a white line on the floor, indicating that the robot had entered the room or, if it was already in a room, that it had come within 12" of the candle. When Cricket 3 determined that the robot was crossing a white line and entering a room, it sent an IR signal to Crickets 1 and 2. This signal prompted Cricket 1 to stop navigating through the hallways and relinquish primary control. This same signal prompted Cricket 2 to take primary control and check whether or not the candle was in that room. If the candle was in the room, Cricket 2 retained primary control, sending signals to Cricket 1 to manipulate the robot. If the candle was not in the room, Cricket 2 sent an IR signal to Crickets 1 and 3 indicating that the robot should return to the hallway, and then Cricket 2 relinquished primary control. Cricket 1 then took primary control again.
Cricket 3 also received an IR message from Cricket 1 every time that Cricket 1 decided to make a turn. If Cricket 1 decided to turn right, it sent one signal (an 8, in this case); if it decided to turn left, it sent a different signal (a 9). Cricket 3 stored the most recent turn message in a variable. When the robot exited a room after Cricket 2 determined that the candle was not there, Cricket 3 sent a message to Cricket 1 with the information about its most recent turn. Since the most recent turn had been the turn that brought the robot into the room, Cricket 1 could have the robot simply retrace this turn to continue going in the direction it had been going before stopping in the room. However, Cricket 1 did first test to see if continuing in the same direction was feasible. If the robot was at the end of a hallway and could not continue in the direction it had been going, Cricket 1 evaluated the situation as it would any new intersection and made an appropriate decision.
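At a high level, Cricket 3's job can be sketched as a single loop that watches the floor sensor and the IR traffic. In the sketch below, the floor sensor is assumed to be on sensor port A, white-thold and the message number 2 are hypothetical placeholders, and lastturn is a hypothetical global; the real program, which also distinguished entering a room from crossing the 12" circle, is in Appendix F.

to supervise
  loop
    [if (newir?)                             ;remember Cricket 1's most recent turn
       [if (ir = 8) [setlastturn 8]          ;8: Cricket 1 turned right
        if (ir = 9) [setlastturn 9]]         ;9: Cricket 1 turned left
     if (sensora > white-thold)              ;floor sensor sees a white line
       [send 2                               ;announce the line to Crickets 1 and 2
        waituntil [sensora < white-thold]]]  ;wait until back over black before continuing
end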
Appendix A: Floor Plan

Click here to see the design of the maze. This design was listed on the web site of the Trinity College Fire-Fighting Home Robot Contest and was available to all contestants prior to the competition.
Appendix B: Cricket Analysis

Click here for our notes on the "quirks" of Crickets and "features" of Cricket Logo.
Appendix C: Control Flow Diagram
Click here to see a visual representation of the control flow between the three Crickets.
Appendix D: Cricket 1 Code

Click here to see the code for Cricket 1, which controlled the hallway navigation.
Appendix E: Cricket 2 Code

Click here to see the code for Cricket 2, which controlled the robot as it approached the candle and extinguished the flame.
Appendix F: Cricket 3 Code

Click here to see the code for Cricket 3, which managed the control flow.