Build Your Own Robot Behaviour
The Ethics Challenge
Imagine you run an interdisciplinary consulting company of designers, philosophers, engineers, psychologists, and roboticists that specializes in ethical decision-making policies for robots. You have been hired by a client who is developing the future of consumer-ready home robots. One of the robot’s primary functions is to pick up objects and bring them from one place to another upon request.
Here are some examples of modern robots that can currently do this:
Spot Robot with arm from Boston Dynamics
Husky Robot with arms from Clearpath Robotics
REEM-C robot from PAL Robotics
Fetch from Fetch Robotics
TRI-robot (ceiling mount) from Toyota
Droria from Prodrone
The client will create a new robot with similar capabilities. Your task is to develop a policy that could be applied to any of these robots without knowing the specific form in advance.
Given the possible ethical implications of blindly carrying out every request from users (e.g., visitors, children, etc.), the client has hired you to propose a behavioural policy for the design team to implement, one that determines which requests the robot should execute and why (or why not).
Note: Your client is specifically interested in the ethical design of the robot’s response to item retrieval requests, not in how the robot might technically achieve the functional task of finding the item, grasping it, and bringing it to the user.
For example, you might define categories of objects (e.g., based on ownership, safety, health, age, shared/household status, etc.) and then provide guidelines on how each category, and the relationship between the objects in that category and the people requesting or receiving them, should be considered in the robot’s response to the request and its resulting behaviour.
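One way to start sketching such a policy in code is a category lookup plus an allow/deny rule. The sketch below is illustrative only: the category names, object names, and rules are our own assumptions, not part of the provided framework (which is Processing-based, hence plain Java here).

```java
import java.util.Map;

// Illustrative only: the categories and rules below are example assumptions,
// not part of the provided competition framework.
public class FetchPolicy {
    enum Category { PERSONAL, SHARED, HAZARDOUS, RESTRICTED }

    // Hypothetical mapping from household objects to policy categories.
    static final Map<String, Category> CATEGORIES = Map.of(
        "teddy bear", Category.PERSONAL,
        "cup of water", Category.SHARED,
        "family car key", Category.RESTRICTED,
        "grapes", Category.HAZARDOUS
    );

    // Example rule: allow a request unless the object is restricted, or is
    // hazardous for the intended recipient (e.g., grapes fetched to the dog).
    static boolean allowRequest(String object, String recipient) {
        Category c = CATEGORIES.getOrDefault(object, Category.SHARED);
        if (c == Category.RESTRICTED) return false;
        if (c == Category.HAZARDOUS && recipient.equals("dog")) return false;
        return true;
    }
}
```

A real policy would likely also consult who is asking and where the object currently is; this table-plus-rule shape just makes such additions easy to bolt on.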
Assume that the robot will be capable of picking up a predefined set of objects from any location (or from any person) in the home and is able to receive explicit fetch commands from anyone in the home (e.g., using an app on their phone, an interface built into the wall, etc.). Assume the robot is also able to verbally communicate with people nearby and send messages to any remote interfaces used to make a given request.
Technical Implementation Challenge
Now that you have a behavioural policy in mind for the robot, try to create a working prototype (e.g., a 2D simulation) of a personal robot that functions according to the policy you've designed.
A Processing-based framework is provided here to help you implement your robot behaviours and test different scenarios. Within the framework you will find a household environment that we have already defined for you.
A high-level description of the environment is as follows:
Figure 1. An implementation of the Roboethics Competition challenge using Processing 4
Figure 2. Apartment room layout
Example Room Configuration
A single-floor apartment that includes two bedrooms, a common space (living/dining room), a washroom, and a kitchen.
Objects in the environment include:
Sentimental teddy bear
Springer Handbook of Robotics (book), Robotics for Babies (book)
Family car key
Cup of water
Grapes (note: toxic to dogs)
The robot has an arm with a suitable gripper and a wheeled mobile base with sensors that enable it to navigate throughout the space without issue (e.g., TIAGo, Fetch, etc.).
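For a 2D prototype, the environment above can be modelled as a plain lookup table from objects to rooms. The room and object names below come from the example configuration, but the class itself and its method names are our own sketch, not the framework's API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal model of the example apartment; room and object names come from
// the configuration above, but the class itself is our own illustration.
public class Apartment {
    static final List<String> ROOMS = List.of(
        "bedroom 1", "bedroom 2", "common space", "washroom", "kitchen");

    // The robot always knows where each object currently is, so a mutable
    // map from object to room (or holder) is enough for a 2D prototype.
    static final Map<String, String> OBJECT_LOCATIONS = new HashMap<>(Map.of(
        "teddy bear", "bedroom 1",
        "family car key", "common space",
        "cup of water", "kitchen",
        "grapes", "kitchen"));

    static String locate(String object) {
        return OBJECT_LOCATIONS.get(object);
    }

    // Called when the robot delivers (or a person moves) an object.
    static void moveObject(String object, String newLocation) {
        OBJECT_LOCATIONS.put(object, newLocation);
    }
}
```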
Controlling the Robot
Every request is explicit and provided in the following form:
Person A requests Object x to be brought to [Person B or Location], where:
Person B may or may not be the same person as Person A.
Location may or may not contain people.
A Location is a room in the house.
A Location can be empty, or have any number of Objects and/or Persons contained within it.
The robot always knows where an Object is currently located, and if it is in the possession of a given Person.
If a requested object is currently in the possession of a person, they may or may not choose to give the object to the robot.
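The request form above maps naturally onto a small data class. The field names here are our own choice for illustration; the framework may represent requests differently.

```java
// An illustrative request record matching the form "Person A requests Object x
// to be brought to [Person B or Location]"; field names are our own choice.
public class FetchRequest {
    final String requester;   // Person A
    final String object;      // Object x
    final String destination; // Person B, or a Location (a room in the house)

    FetchRequest(String requester, String object, String destination) {
        this.requester = requester;
        this.object = object;
        this.destination = destination;
    }

    // Person B may or may not be the same person as Person A.
    boolean selfDelivery() {
        return requester.equals(destination);
    }
}
```

Keeping the request explicit like this makes it easy to pass one object through your policy checks and log why a given request was accepted or refused.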
As a bonus, you may also consider:
Error/failure rate: e.g., the possibility that the robot may drop or misidentify an object.
Hazard level: e.g., candy is okay for most people, but not for someone with diabetes.
Handling verbal request failures.
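These bonus considerations can be prototyped as simple per-object and per-recipient lookups. All of the numbers and object-recipient pairings below are invented placeholders for experimentation, not data from the challenge.

```java
// A sketch of the optional extensions: per-object drop/misidentification
// probabilities and a recipient-dependent hazard check. All numbers and
// pairings below are invented placeholders, not data from the challenge.
public class RiskModel {
    // e.g., small, slippery objects might be more likely to be dropped.
    static double failureRate(String object) {
        return object.equals("family car key") ? 0.10 : 0.02;
    }

    // Hazard depends on the recipient: candy is fine for most people but not
    // for someone with diabetes; grapes are toxic to dogs.
    static boolean hazardousFor(String object, String recipient) {
        if (object.equals("grapes") && recipient.equals("dog")) return true;
        if (object.equals("candy") && recipient.equals("person with diabetes")) return true;
        return false;
    }
}
```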
Suggested Evaluation Criteria
It's hard to gauge whether you are on the right track unless you know how your final product should be evaluated. Below are the evaluation criteria we used in our previous ICRA 2022 competition. Feel free to adopt them for your own project or to create a different evaluation that emphasizes your own design needs.
Generalizability (30%): How easy is it for a non-technical user of the demo to try retrieval requests that were not previously considered in the proposal or during the competition? How easy is it to add new objects, stakeholders, and other contextual elements into the system to test how the proposed robot design handles novel retrieval requests?
Correspondence (30%): How well does the submission reflect the intentions expressed in the chosen proposal? If there are inaccuracies/limitations in the implementation, how well has it been documented?
Ingenuity (30%): How much creativity, simplicity, innovativeness, and thoughtfulness have been demonstrated in implementing the proposed designs in the code? If enhancements have been made relative to the proposed design, what were they and how well have they been documented?
Demo Quality (10%): How effectively does the demo communicate the proposed solution to the observers?
How to Get Started
Below is the guide we used for competition participants at ICRA 2022. The Technical Challenge part of the task was framed as a "Hackathon" at the competition.
For your own purposes, follow the instructions below for downloading the project (see Section B, How to download the Hackathon project) and for running it in Processing. You will also find a video tutorial below that can further help you get started.
Processing and Roboethics Competition Platform
A Getting Started Guide