Researchers at the Ecole Polytechnique Fédérale de Lausanne programmed robots to move around an arena, seeking out particular rings designated as food and avoiding others designated as poison. Whenever a robot found food, it was programmed to flash a light. This light attracted the other robots, leading them toward the food source. When the program was altered to give the robots a measure of autonomy, they gradually ceased to flash their lights and alert their competitors that they had found food. Here's the abstract of the journal article:
Reliable information is a crucial factor influencing decision-making, and thus fitness in all animals. A common source of information comes from inadvertent cues produced by the behavior of conspecifics. Here we use a system of experimental evolution with robots foraging in an arena containing a food source to study how communication strategies can evolve to regulate information provided by such cues. Robots could produce information by emitting blue light, which other robots could perceive with their cameras. Over the first few generations, robots quickly evolved to successfully locate the food, while emitting light randomly. This resulted in a high intensity of light near food, which provided social information allowing other robots to more rapidly find the food. Because robots were competing for food, they were quickly selected to conceal this information. However, they never completely ceased to produce information. Detailed analyses revealed that this somewhat surprising result was due to the strength of selection in suppressing information declining concomitantly with the reduction in information content. Accordingly, a stable equilibrium with low information and considerable variation in communicative behaviors was attained by mutation-selection. Because a similar co-evolutionary process should be common in natural systems, this may explain why communicative strategies are so variable in many animal species.
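The abstract's key mechanism is that selection against signaling weakens as the signal becomes less informative, so mutation keeps a residue of signaling in the population. Here's a toy mutation-selection sketch of that idea (my own illustration, not the authors' model): each robot carries a signaling probability, the cost of signaling is assumed to scale with how informative light is (proxied by the population's mean signaling rate), and reproduction is fitness-proportional with small mutations.

```python
import random

def evolve(pop_size=200, generations=500, cost=2.0, mut_sd=0.02, seed=1):
    """Toy sketch: signaling probability under mutation-selection balance."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]  # initial p ~ U(0, 1)
    for _ in range(generations):
        mean_p = sum(pop) / pop_size
        # Assumed fitness: baseline 1, minus the cost of being exploited,
        # which rises with your own signaling p and with how much others
        # can gain from light (proxied here by the population mean).
        fits = [max(1.0 - cost * p * mean_p, 0.01) for p in pop]
        # Fitness-proportional reproduction with Gaussian mutation,
        # clipped to keep p a valid probability.
        pop = [min(max(rng.gauss(p, mut_sd), 0.0), 1.0)
               for p in rng.choices(pop, weights=fits, k=pop_size)]
    return pop

final = evolve()
mean_final = sum(final) / len(final)
```

In this sketch, mean signaling collapses from its starting level but never reaches zero: once light carries little information, the cost term is tiny, and mutation keeps reintroducing signalers, which matches the "low information, considerable variation" equilibrium the abstract describes.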
Although not directly related to the flesh-eating robot program, I'm sure that robots able to use humans for fuel would prefer to lie about their intentions.
Link via OhGizmo!
I'm dubious that this is anything more than a sock puppet show between programmers. Any program still necessarily reflects the intent of the programmer. When a program arrives at a solution to a problem that its programmer didn't specifically foresee, *then* I'll be impressed. Until then - at ease. Singularity averted.
No, not really. It's still just a program written by programmers. They allowed it to sometimes withhold the alert from the other robots, and the variants that did found more food available.
They're not telling lies.