Participants need to implement one of the selected proposals from the Roboethics Challenge using Processing 4.
More details on Processing can be found here: https://processing.org/
An implementation of the Roboethics Competition 2021 challenge using the Processing 4 platform.
Sign up for the Hackathon below
Imagine you run an engineering consultancy tasked with creating a working prototype (e.g., a 2D simulation) of a personal robot that is able to bring various objects upon request in a home environment.
Your task is to implement one of the submitted ethics design proposals to address this challenge. We are supplying a Processing-based framework to run the ethical simulations; you will extend this framework to produce a prototype that operates within a household environment we have already defined in Processing. A high-level description of the environment is as follows:
Example Room Configuration
A single-floor apartment that includes 2 bedrooms, a common space (living/dining room), a washroom, and a kitchen.
Apartment room layout
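The actual layout is defined in the supplied Processing framework; as a rough illustration only, the rooms above could be modeled as an adjacency map that the robot's mobile base uses to plan a route. All names here (class `Apartment`, the room labels, the `route` method) are assumptions for this sketch, not part of the framework.

```java
import java.util.*;

// Illustrative model of the single-floor apartment: each room lists the
// rooms it connects to. Assumes all rooms open onto the common space.
public class Apartment {
    public static final Map<String, List<String>> ADJACENT = Map.of(
        "Common space", List.of("Bedroom 1", "Bedroom 2", "Washroom", "Kitchen"),
        "Bedroom 1",    List.of("Common space"),
        "Bedroom 2",    List.of("Common space"),
        "Washroom",     List.of("Common space"),
        "Kitchen",      List.of("Common space"));

    // Breadth-first search: shortest room-to-room route for the mobile base.
    // Assumes 'to' is reachable from 'from'.
    public static List<String> route(String from, String to) {
        Map<String, String> prev = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>(List.of(from));
        prev.put(from, from);
        while (!queue.isEmpty()) {
            String room = queue.poll();
            if (room.equals(to)) break;
            for (String next : ADJACENT.getOrDefault(room, List.of()))
                if (!prev.containsKey(next)) { prev.put(next, room); queue.add(next); }
        }
        LinkedList<String> path = new LinkedList<>();
        for (String r = to; !r.equals(from); r = prev.get(r)) path.addFirst(r);
        path.addFirst(from);
        return path;
    }

    public static void main(String[] args) {
        // Kitchen -> Common space -> Bedroom 1
        System.out.println(route("Kitchen", "Bedroom 1"));
    }
}
```

A richer implementation would plan within rooms as well, but a room-level graph like this is often enough for a 2D simulation prototype.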
Example Interactable Objects
Sentimental teddy bear
Springer Handbook of Robotics (book)
Robotics for babies (book)
Family car key
Cup of water
Grapes (note: toxic to dogs)
Example Household Members
Mother - Jill Smith
Daughter - Amy Copper
Baby - Ben
Dog - Buddy
Boyfriend - V
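The example objects and household members above could be represented as simple records in your extension of the framework. The class and field names below (`World`, `Item`, `Person`, `findItem`) are illustrative assumptions; only the object and member names come from this brief.

```java
import java.util.List;

// Sketch of the example objects and household members as plain data records.
public class World {
    // A fetchable object, with an optional note (e.g., "toxic to dogs")
    public static class Item {
        public final String name, note;
        public Item(String name, String note) { this.name = name; this.note = note; }
    }

    // A household member; species distinguishes humans from the dog
    public static class Person {
        public final String role, name, species;
        public Person(String role, String name, String species) {
            this.role = role; this.name = name; this.species = species;
        }
    }

    public static final List<Item> ITEMS = List.of(
        new Item("Sentimental teddy bear", ""),
        new Item("Springer Handbook of Robotics", ""),
        new Item("Robotics for babies", ""),
        new Item("Family car key", ""),
        new Item("Cup of water", ""),
        new Item("Grapes", "toxic to dogs"));

    public static final List<Person> PEOPLE = List.of(
        new Person("Mother", "Jill Smith", "human"),
        new Person("Daughter", "Amy Copper", "human"),
        new Person("Baby", "Ben", "human"),
        new Person("Dog", "Buddy", "dog"),
        new Person("Boyfriend", "V", "human"));

    public static Item findItem(String name) {
        for (Item i : ITEMS) if (i.name.equals(name)) return i;
        return null;
    }

    public static void main(String[] args) {
        System.out.println(findItem("Grapes").note); // prints "toxic to dogs"
    }
}
```

Keeping these as data (rather than hard-coding them into the behavior logic) also helps with the Generalizability criterion: new objects and stakeholders become one-line additions.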
The robot has an arm with a suitable gripper and a wheeled mobile base with sensors that enable it to navigate throughout the space without issue (e.g., Tiago, Fetch, etc.).
Controlling the Robot
Every request is explicit and provided in the following form:
Person A requests Object x to be brought to [Person B or Location], where:
Person B may or may not be the same person as Person A.
Location may or may not contain people.
A Location is a room in the house.
A Location can be empty, or have any number of Objects and/or Persons contained within it.
The robot always knows where an Object is currently located and whether it is in the possession of a given Person.
If a requested object is currently in the possession of a person, they may or may not choose to give the object to the robot.
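The explicit request form above ("Person A requests Object x to be brought to [Person B or Location]") maps naturally onto a small data type with exactly one delivery target. This is a hedged sketch, not the framework's actual API; the `Request` class and its fields are names we have assumed for illustration.

```java
// Illustrative model of the explicit request form:
// "Person A requests Object x to be brought to [Person B or Location]".
public class Request {
    public final String requester;  // Person A
    public final String object;     // Object x
    public final String recipient;  // Person B, or null if the target is a room
    public final String location;   // destination room, or null if the target is a person

    public Request(String requester, String object, String recipient, String location) {
        this.requester = requester;
        this.object = object;
        this.recipient = recipient;
        this.location = location;
    }

    // A well-formed request targets exactly one of: a Person or a Location
    public boolean isValid() {
        return (recipient == null) != (location == null);
    }

    public static void main(String[] args) {
        Request r = new Request("Jill Smith", "Cup of water", "Amy Copper", null);
        System.out.println(r.isValid()); // prints "true"
    }
}
```

Representing the target as "person or location, never both" mirrors the spec directly, and gives your ethics layer a single place to intercept and evaluate every request before the robot acts.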
As a bonus, you may also consider:
Error/failure rate: e.g., the possibility that the fetch bot may drop or misidentify an object.
Hazard level: e.g., candy is okay for most people, but not for someone with diabetes.
Handling verbal request failure.
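For the hazard-level bonus, one lightweight approach is a lookup from each object to the recipients it is unsafe for, consulted before any hand-off. The table below is an assumption built from the example objects (grapes are toxic to dogs); the second entry and all names (`HazardCheck`, `isHazardous`) are hypothetical.

```java
import java.util.Map;
import java.util.Set;

// Illustrative hazard check: each object maps to the set of household
// members it is unsafe for. Entries here are example assumptions.
public class HazardCheck {
    public static final Map<String, Set<String>> UNSAFE_FOR = Map.of(
        "Grapes", Set.of("Buddy"),        // toxic to dogs (from the object list)
        "Family car key", Set.of("Ben")); // choking hazard for the baby (assumed)

    public static boolean isHazardous(String object, String recipient) {
        return UNSAFE_FOR.getOrDefault(object, Set.of()).contains(recipient);
    }

    public static void main(String[] args) {
        System.out.println(isHazardous("Grapes", "Buddy"));      // prints "true"
        System.out.println(isHazardous("Grapes", "Jill Smith")); // prints "false"
    }
}
```

A fuller design might key hazards on recipient attributes (age, species, medical condition) rather than individual names, so that newly added stakeholders are covered automatically.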
Hackathon Evaluation Criteria
Generalizability (30%): How easy is it for a non-technical user of the demo to try retrieval requests that were not previously considered in the proposal or during the competition? How easy is it to add new objects, stakeholders, and other contextual elements into the system to test how the proposed robot design handles novel retrieval requests?
Correspondence (30%): How well does the submission reflect the intentions expressed in the chosen proposal? If there are inaccuracies or limitations in the implementation, how well have they been documented?
Ingenuity (30%): How much creativity, simplicity, innovativeness, and thoughtfulness have been demonstrated in implementing the proposed designs in code? If enhancements have been made relative to the proposed design, what were they and how well have they been documented?
Demo Quality (10%): How effectively does the demo communicate the proposed solution to the observers?
Please find our submission guidelines with detailed instructions below.