
Sunday, October 1, 2023

[Post :10] NetLogo Simulation Progress Part - 3

I managed to implement all the functionalities I was thinking of in the beginning. However, there are some limitations in the simulation. I uploaded the project to GitHub so anybody can find the implementation-related information. [Github Link]

By default, the NetLogo floor is wrapped. That means when an agent reaches an edge, instead of colliding with it, the agent emerges from the other side. From the agents’ perspective, they have an infinitely large area to roam around. This feature is not going to be used in my implementation, as the agents are supposed to work in bounded environments. The feature can be easily disabled in the program, but the edge cases then need to be handled, or the agents will collide with the edges and stick to them. Since obstacle avoidance was already implemented in the program, the same behavior is mirrored to deal with edges. The only difference is how agents detect obstacles versus edges.

In order to detect obstacles, agents check for yellow patches in their neighborhood. The edges are detected using the 'patch-ahead' primitive, which returns the 'nobody' value when there is no patch ahead of the agent, i.e., when it is near an edge.
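As a rough sketch of how the mirrored edge handling can look in NetLogo (the procedure name and turn angle are my own; the actual code is in the repository):

```netlogo
; sketch: treat a missing patch ahead (the world edge) like an obstacle
to avoid-obstacles-and-edges
  let ahead patch-ahead 1
  ; 'nobody' means we are looking past the world edge;
  ; yellow means an obstacle patch
  if ahead = nobody or [pcolor] of ahead = yellow [
    rt random 180   ; random maneuver instead of moving forward
  ]
end
```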



In the obstacle detection part, you can see the agent changes its color. As discussed in the previous post, this is due to the freedom it has at each moment.

An agent is supposed to follow another agent if that agent has more freedom than itself. In order to follow, the current agent turns towards the agent with the most freedom while maintaining a minimum distance between them. The code in the GitHub repository can be consulted for further details.


Since the functionality can be understood in detail from the code, I will focus here on the GUI elements built to change the simulation conditions.

 

Custom Agent Placement

When the agent count is set and the simulation is initialized, the agents spawn at random positions all around the floor. If obstacles are already placed, the status of a patch is checked before placing; if the chosen patch is already occupied by an obstacle, another random position is selected. This behavior might not be suitable for some applications. To avoid this problem, two buttons were added to place and delete agents using the mouse cursor.

Custom Obstacle Placement

Similar to agent placement, this feature can be used to draw obstacles on the floor. The only difference is that the cursor needs to be dragged across the floor while holding the mouse button. The agent function will only place a single agent, no matter how long the mouse button is held.
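A minimal sketch of how such mouse-driven editing can be done in NetLogo (the procedure names and the 'bots' breed are assumptions; the actual code is in the repository):

```netlogo
; sketch: procedures meant to run from 'forever' buttons

to draw-obstacle              ; drag to paint obstacle patches
  if mouse-down? [
    ask patch mouse-xcor mouse-ycor [ set pcolor yellow ]
  ]
end

to place-agent                ; one click adds exactly one bot
  if mouse-down? [
    create-bots 1 [ setxy mouse-xcor mouse-ycor ]
    while [mouse-down?] [ ]   ; wait for release so holding adds only one
  ]
end
```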

Marking Explored Area

A toggle switch was placed on the GUI to enable or disable the coloring of the explored area. When an agent passes through a patch, it changes the color of the patch to blue. This needs to be turned on to update the graph that shows the explored region, but at the same time it slows the simulation down a bit.
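The marking itself can be sketched like this (the switch name is illustrative; in NetLogo a turtle can set its patch's color directly):

```netlogo
; sketch: mark the patch under each bot as explored
to mark-explored
  if show-explored? [              ; GUI switch (illustrative name)
    ask bots [ set pcolor blue ]   ; sets the pcolor of the patch under the bot
  ]
end
```

The explored-region graph can then plot something like `count patches with [pcolor = blue] / count patches`.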

Loading an existing obstacle layout

In the default settings, the origin of the NetLogo floor's coordinate system, (0,0), is at the middle of the floor rather than at a corner. This default convention is crucial to know when loading an obstacle layout, as a previously saved coordinate list will be interpreted completely differently in another coordinate system.

Drawing obstacles in specific shapes takes a long time, and it is not practical to repeat the drawing in every run. So the 'save obstacle pattern' button can be used to save the layout. It saves the coordinates of the yellow patches into a '.txt' file, which can be loaded back by clicking 'load obstacle pattern'.
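Using NetLogo's file primitives, the save/load pair can be sketched roughly as follows (the file name is illustrative; the actual implementation is in the repository):

```netlogo
; sketch: persist the yellow patches and restore them later
to save-obstacles
  if file-exists? "obstacles.txt" [ file-delete "obstacles.txt" ]
  file-open "obstacles.txt"
  ask patches with [pcolor = yellow] [
    file-write pxcor
    file-write pycor
  ]
  file-close
end

to load-obstacles
  file-open "obstacles.txt"
  while [not file-at-end?] [
    let x file-read
    let y file-read
    ask patch x y [ set pcolor yellow ]
  ]
  file-close
end
```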


I will publish the last post of this project with a demo video, the report, and the GitHub link, along with the comments I received from the panel.

Tuesday, September 12, 2023

[Post :09] NetLogo Simulation Progress Part - 2

In addition to the links I shared in the previous post, I found this super useful document to learn about NetLogo programming. 

In the simulation, I managed to implement the following behaviors in the agents. I refer to the agents as bots. The bots are capable of the following tasks:

  • Can detect if the target enters into the vision range
  • Can calculate freedom (what freedom is that?)
  • Can detect the agent with maximum freedom in the local region
  • Can follow a specific bot
  • Can avoid colliding with obstacles in the environment
  • Can detect the edges of the world and turn around 
Let me tell you what each of the mentioned features does and how I managed to implement them.

Detecting targets in the vision range
Before detecting the target, we need to specify the vision range of the bot. The vision range is going to be a circular region around the bot. This is implemented using the 'in-radius' primitive, which takes the radius as a parameter. In the code, it looks something like this.
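Since the code screenshot is not reproduced here, a minimal sketch of the check (run in a bot's context; 'targets' and 'vision-distance' match the breed and slider mentioned in these posts, the reaction is illustrative):

```netlogo
; sketch: is the target inside this bot's circular vision range?
if any? targets in-radius vision-distance [
  set color red   ; react to the detection (illustrative)
]
```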


Visual representation of the code


The good thing about NetLogo is that it is easy to understand written code. The bad thing is that it is not easy to write code that does exactly what I want, as it has to be formatted to comply with NetLogo's rules.

A GUI slider named 'vision-distance' was added to the main interface so that the radius value can be adjusted and fine-tuned for the best performance.



Calculating freedom
The algorithm uses a value called the 'freedom score' to quantify the amount of free space a bot has to move around in its local environment. Whenever there are other agents in the local area, they use this value to decide whether they need to follow someone or move on their own. While it would be best to use proximity sensor readings, which give distance values, in NetLogo a primitive called 'neighbors' is used instead. This primitive reports the 8 patches around the agent (the floor area in NetLogo is divided into patches). Accordingly, each yellow obstacle patch in the neighborhood reduces the freedom value by 1 unit.

Visual representation of freedom score and an example where freedom = 5

Once the freedom score is calculated, the agents use their color to convey the score to others. While the base color of a bot is green, the color takes different shades depending on the freedom. An agent with the maximum freedom of 8 is shown in a brighter green, while any agent with a freedom score of 5 or lower takes a darker shade of green. The color representations are as follows,
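The color chart itself is an image, but the score and shading logic can be sketched roughly like this (the 'freedom' variable and the exact shade offsets are assumptions; NetLogo color arithmetic gives lighter/darker shades):

```netlogo
; sketch: compute the freedom score and shade the bot accordingly
to update-freedom            ; assumes bots-own [freedom]
  set freedom 8 - count neighbors with [pcolor = yellow]
  ifelse freedom > 5
    [ set color green + 1 ]  ; high freedom: brighter green
    [ set color green - 2 ]  ; freedom 5 or lower: darker green
end
```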




Obstacle avoidance and world edge detection
The obstacles in the environment are represented in yellow. Hence, agents check for any yellow patches in front of them within a cone-shaped vision range. To detect the edges of the world, agents use a primitive called 'patch-ahead'. This primitive reports the patch at a given distance ahead of the agent, but reports the 'nobody' value if that patch lies outside the world edge. In both cases, the agent takes a random maneuver to avoid moving forward and hitting an obstacle or edge.


Following an agent

Agents are required to obey 3 rules when following another agent:
  • The distance between 2 agents should not exceed a maximum value
  • The distance between 2 agents should not be below a minimum value
  • A cone-shaped region in front of the agent should be empty
These 3 rules, combined with the freedom score logic, make the whole swarm split into several groups and explore different regions. This helps to explore the area much quicker.
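Putting the freedom comparison and these rules together, the behavior can be sketched as below (the 'min-gap' slider and the cone parameters are illustrative; rule 1, the maximum distance, is only noted in a comment):

```netlogo
; sketch: follow the freest nearby bot, or roam on your own
to follow-or-roam            ; assumes bots-own [freedom]
  let leader max-one-of (other bots in-radius vision-distance) [freedom]
  ifelse leader != nobody and [freedom] of leader > freedom [
    face leader
    ; rule 2: keep a minimum gap; rule 3: only advance when the
    ; cone ahead is clear (rule 1 would cap the maximum distance)
    if distance leader > min-gap and not any? other bots in-cone 2 60 [
      fd 1
    ]
  ] [
    rt random 40 - 20        ; nobody freer nearby: wander
    fd 1
  ]
end
```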

In order to interact with the simulation area, additional procedures have to be developed. These procedures will give the user the ability to place or remove bots on the floor using the mouse, add/remove obstacles, save or load an obstacle map, etc. I will post another article with that information.



Sunday, September 3, 2023

[Post :08] NetLogo Simulation Progress

When you open NetLogo, the first interface you see is the simulation area. In order to access the coding area, you need to click on the 'Code' tab, which is located just under the menu bar.

There are 5 types of entities in the NetLogo world: Observer, Turtles, Patches, Links, and Utilities. All the built-in functions (they are called primitives) fall under one of the five mentioned above. Click on this link to go to the interactive primitives dictionary of NetLogo. Before jumping right into the coding just like me and getting frustrated, I recommend you refer to these 3 links: 'What is NetLogo?', 'What is a primitive?', and 'The first 11 primitives to learn'.

Compared to programming in a language like Java, there are differences in building simulations in NetLogo. The main difference I noticed is that square brackets are used instead of curly brackets for conditionals. Also, inside an 'ask' block, the keyword 'self' does not refer to the asking turtle but to the turtle we are asking to do something. To refer to the agent that issued the 'ask', the keyword 'myself' should be used.
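A small sketch of the 'self'/'myself' distinction inside a nested 'ask' (the breed and variable names are illustrative):

```netlogo
; sketch: 'myself' is the asker, 'self' is the agent being asked
ask bots [
  ask other bots in-radius 3 [
    ; here 'self' is this inner bot; 'myself' is the outer bot
    if [freedom] of myself > freedom [ face myself ]
  ]
]
```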
The best thing is that whenever I place a button or a slider in the GUI area, I can refer to its value simply by mentioning the name of the element.
There are several other important things I have noticed so far to keep in mind when dealing with NetLogo programs (there should be more), but those can be found in the interactive dictionary.

NetLogo refers to the agents in a simulation as turtles. If I needed only one type of agent, I could simply go with turtles. However, I need two types of agents in the simulation: the bots that do the searching and the target that is being searched for. So I need to create two types of turtles, which I can do using the 'breed' primitive. For each of these types, I can assign features and their own variables to be used during the simulation.
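A sketch of how such a program can be set up with two breeds (the variable names and counts are my own assumptions):

```netlogo
; sketch: two breeds, each with its own variables
breed [bots bot]
breed [targets target]

bots-own [freedom]     ; illustrative per-bot variable

to setup
  clear-all
  create-bots 10 [
    setxy random-xcor random-ycor   ; bots spawn at random places
    set color green
  ]
  create-targets 1 [ setxy 0 0 set color red ]  ; target in the middle
  reset-ticks
end

to go
  ask bots [ fd 1 ]    ; for now the bots just move forward
  tick
end
```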

Here is the very first NetLogo program I created after hours of experimenting by myself and then going through the documentation. (There are some additional GUI elements, as I am experimenting now.) It simply creates 2 breeds. As of now, the bots just spawn in random places in the environment and move forward. The target stays in the middle. No goal detection or any other logic is implemented yet.




Friday, September 1, 2023

[Post :07] Starting with NetLogo

When I first saw the NetLogo program, my first impression was not a good one, because it looks so old. But when I saw the simulations that have been developed using it, I changed my mind. Once you install NetLogo, all these simulations are also available under File -> Models Library (Ctrl+M).

NetLogo Interface


If installing NetLogo is not an option, it is possible to use the online version of NetLogo in the browser as well. Click this link to access the web version.


One feature that made me like NetLogo is the scripting language that is used to develop the models. (Initially, I was trying to develop a simulator for my simulation from scratch using Python, so I knew how hard it is to create a fully functioning simulator.) When I was reading the code, it read almost like a sentence written in English, because each word explained what it was supposed to do. Check the following sample line, which can be used to find out if there is a target around a specific agent.
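The screenshot of that line is not reproduced here, but based on the later posts it would be along these lines (names illustrative):

```netlogo
; reads almost like English: is there any target within the vision distance?
any? targets in-radius vision-distance
```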

So developing a model to simulate my algorithm is going to be far less complicated than I expected.
(PS: I was wrong)

Thursday, August 24, 2023

[Post :06] This is my new approach

I came up with a new approach to exploring the environment without having a leader. The agents will still calculate the amount of room they have to move around, but instead of comparing with all the agents in the communication range to find the one with the most freedom, only the values of the immediate agents will be compared. If one of them has a higher value, the agent will start following that agent. If not, the agent will calculate the next position to move to on its own.

In this approach, there will be no leader to guide or command all the other agents. As a matter of fact, the swarm will split into several groups that wander in the environment on their own. Wait, what? Split? Doesn't that break the swarm apart? Yeah, even I had the same question. But in swarm robotics, it's not about how close the agents stay when moving; it's about how they communicate and how they behave on their own. So it is possible not to have a clump of agents at all. Check out this link my supervisor sent me after I showed him my previous approach.

The next update is that instead of doing the simulation in Webots, I'm doing it in a piece of software named NetLogo. Here is the link to the NetLogo home page. Check this link to see a page about a flocking simulation built with NetLogo (also sent to me by my supervisor).

Anyway, this is my new approach to searching for a target in an unknown environment.









Now that this approach has been approved by my supervisor, it is time to implement this using NetLogo. I will post more updates on NetLogo implementation progress as well.

Thursday, August 17, 2023

[Post :05] Here's what I did wrong

After going through many articles I started working on the algorithm itself. I thought of all the steps the swarm agents have to take and even came up with a flow chart to make it easier to understand. Then I showed it to my supervisor to get his opinion. He pointed out what went wrong in my approach. Before pointing out my mistake, I will show you my approach and the flow chart I came up with. (Try to figure out what is wrong here. I'll mention it at the end).











It looks good right?.... right? (no?. 😐 )

Did you spot the issue? (If so you are better than me)

The problem is ....... the "LEADER". There should not be a leader in a swarm. I mean come on. How could I miss that (Somehow I did. duh.). The whole point of swarm robotics is not to have a leader.

Source: https://knowyourmeme.com/memes/my-goodness-why-didnt-i-think-of-that

But let me tell you why I thought this was okay. In this approach, the leader changes all the time. It's not a permanent role. Even in 2 consecutive moves of the swarm, the leader might be two different agents. So I thought this is fine because the role is temporary and anybody can become a leader.

But at the end of the day, the swarm has had a leader at every point in time. So... yeah. I have to redo everything.

Friday, August 11, 2023

[Post :04] Specifications and constraints of the algorithm

First of all, I need to lay out the specifications and constraints to be considered during the development of the algorithm. 

Here is the list of specifications.

  • Agents can communicate only with other agents within a limited local communication range
  • Agents use infrared emitters and receivers for communication
  • Agents do not localize globally; they use only local localization, based on odometry and laser scan matching, to maintain a sense of orientation when moving from one location to another
  • The swarm should be capable of detecting static and dynamic targets

Here is the list of constraints.

  • The environment will be unknown to agents
  • No centralized communication
  • No computationally heavy calculations
  • Limited communication range and bandwidth

In addition to these, some other limitations and features might be introduced during development. If so, they will be documented in the final report, which I will post on this blog as the final post.

The next step is to start building the algorithm itself. Of course, I won't be able to finalize all the steps in one go; there will be several iterations to get to the final list of activities in the algorithm. Starting from the next post, I'll create and post the flow charts I come up with, along with the problems and required changes in them. By doing so, the thought process of the development and the progress of the algorithm can be tracked.