- Surprise-Driven Belief Revision 2007-04-10
Lorini, E., & Castelfranchi, C. (2006). In Proceedings of the Second Biennial Conference on Cognitive Science, St. Petersburg, 9–13 June 2006.
- Using Particle Filters to Anticipate the Location of Reappearance of a Temporarily Hidden Target 2007-06-01
- Empirical Analysis of Generalization and Learning in XCS with Gradient Descent 2007-10-30
Lanzi, P. L., Butz, M. V., & Goldberg, D. E. (2007). In GECCO 2007: Genetic and Evolutionary Computation Conference, pp. 1814–1821.
- Explorations of anticipatory behavioral control (ABC): A report from the cognitive psychology unit of the University of Würzburg 2007-10-30
(2007). Cognitive Processing, 8, 133–142.
- Mapping neurological disease 2012-09-05
- Disorders such as schizophrenia can originate in certain regions of the brain and then spread out to affect connected areas. Identifying these regions of the brain, and how they affect the other areas they communicate with, would allow drug companies to develop better treatments and could ultimately help doctors make a diagnosis. But interpreting the vast amounts of data produced by brain scans to identify these connecting regions has so far proved impossible.
Now, researchers in the Computer Science and Artificial Intelligence Laboratory at MIT have developed an algorithm that can analyze information from medical images to identify diseased areas of the brain and their connections with other regions.
The MIT researchers will present the work next month at the International Conference on Medical Image Computing and Computer Assisted Intervention in Nice, France.
The algorithm, developed by Polina Golland, an associate professor of computer science, and graduate student Archana Venkataraman, extracts information from two different types of magnetic resonance imaging (MRI) scans. The first, called diffusion MRI, looks at how water diffuses along the white-matter fibers in the brain, providing insight into how closely different areas are connected to one another. The second, known as functional MRI, probes how different parts of the brain activate when they perform particular tasks, and so can reveal when two areas are active at the same time and are therefore connected.
These two scans alone can produce huge amounts of data on the network of connections in the brain, Golland says. “It’s quite hard for a person looking at all of that data to integrate it into a model of what is going on, because we’re not good at processing lots of numbers.”
So the algorithm first compares all the data from the brain scans of healthy people with those of patients with a particular disease, to identify differences in the connections between the two groups that indicate disruptions caused by the disorder.
However, this step alone is not enough: much of our understanding of what goes on in the brain concerns the individual regions themselves, rather than the connections between them, which makes it difficult to integrate connectivity data with existing medical knowledge.
So the algorithm then analyzes this network of connections to create a map of the areas of the brain most affected by the disease. “It is based on the assumption that with any disease you get a small subset of regions that are affected, which then affect their neighbors through this connectivity change,” Golland says. “So our methods extract from the data this set of regions that can explain the disruption of connectivity that we see.”
It does this by hypothesizing, based on an overall map of the connections between each of the regions in the brain, what disruptions in signaling it would expect to see if a particular region were affected. In this way, when the algorithm detects any disruption in connectivity in a particular scan, it knows which regions must have been affected by the disease to create such an impact. “It basically finds the subset of regions that best explains the observed changes in connectivity between the normal control scan and the patient scan,” Golland says.
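The selection step Golland describes can be illustrated with a toy greedy search. This is only an illustrative sketch, not the researchers' actual algorithm: the function names, the binary disruption model (an affected region disrupts every connection incident to it), and the match-counting score are all assumptions made for the example.

```python
import numpy as np

def predicted_disruption(regions, n):
    """Toy forward model: every connection incident to an affected
    region is predicted to be disrupted (boolean n x n matrix)."""
    mask = np.zeros((n, n), dtype=bool)
    for r in regions:
        mask[r, :] = True
        mask[:, r] = True
    np.fill_diagonal(mask, False)
    return mask

def explain_disruption(observed, max_regions=5):
    """Greedily grow the set of affected regions whose predicted
    disruption pattern best agrees with the observed one; stop when
    adding another region no longer improves the fit."""
    n = observed.shape[0]
    chosen = []
    # score = number of connections where prediction and observation agree
    current = np.sum(predicted_disruption(chosen, n) == observed)
    while len(chosen) < max_regions:
        best, best_score = None, current
        for r in range(n):
            if r in chosen:
                continue
            score = np.sum(predicted_disruption(chosen + [r], n) == observed)
            if score > best_score:
                best, best_score = r, score
        if best is None:
            break  # no single region improves the explanation
        chosen.append(best)
        current = best_score
    return chosen
```

Under this toy model, feeding in a disruption pattern generated by a known pair of regions recovers exactly that pair, mirroring the idea of finding "the subset of regions that best explains the observed changes in connectivity."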
When the team used the algorithm to compare the brain scans of patients with schizophrenia to those of healthy people, they were able to identify three regions of the brain — the right posterior cingulate and the right and left superior temporal gyri — that are most affected by the disease.
In the long term, this could help drug companies develop more effective treatments for the disease that specifically target these regions of the brain, Golland says. In the meantime, by revealing all the different parts of the brain that are affected by a particular disorder, it can help doctors to make sense of how the disease evolves, and why it produces certain symptoms.
Ultimately, the method could also be used to help doctors diagnose patients whose symptoms could represent a number of different disorders, Golland says. By analyzing the patient’s brain scan to pinpoint which regions are affected, it could identify which disorder would create this particular disruption, she says.
In addition to schizophrenia, the researchers, who developed the algorithm alongside Marek Kubicki, associate director of the Psychiatry Neuroimaging Laboratory at Harvard Medical School, are also investigating the possibility of using the method to study Huntington’s disease.
Gregory Brown, associate director of clinical neuroscience at the University of California at San Diego’s Center for Functional MRI, who was not involved in developing the model, plans to use it to study the effects of HIV and drug addiction. “We will use the method to gain a clearer perspective on how HIV infection and methamphetamine dependence disrupts large-scale brain circuitry,” he says.
The method is a critical step away from studying the brain as a collection of localized regions toward a more realistic systems perspective, he says. This should assist the study of disorders such as schizophrenia, neurocognitive impairment and dementia associated with AIDS, and multiple sclerosis, which are best characterized as diseases of brain systems, he says.
- Teaching robots lateral thinking 2013-02-25
- Many commercial robotic arms perform what roboticists call “pick and place” tasks: The arm picks up an object in one location and places it in another. Usually, the objects — say, automobile components along an assembly line — are positioned so that the arm can easily grasp them; the appendage that does the grasping may even be tailored to the objects’ shape.
General-purpose household robots, however, would have to be able to manipulate objects of any shape, left in any location. And today, commercially available robots don’t have anything like the dexterity of the human hand.
At this year’s IEEE International Conference on Robotics and Automation, students in the Learning and Intelligent Systems Group at MIT’s Computer Science and Artificial Intelligence Laboratory will present a pair of papers showing how household robots could use a little lateral thinking to compensate for their physical shortcomings.
One of the papers concentrates on picking, the other on placing. Jennifer Barry, a PhD student in the group, describes an algorithm that enables a robot to push an object across a table so that part of it hangs off the edge, where it can be grasped. Annie Holladay, an MIT senior majoring in electrical engineering and computer science, shows how a two-armed robot can use one of its graspers to steady an object set in place by the other.
Most experimental general-purpose robots use a motion-planning algorithm called the rapidly exploring random tree, which maps out a limited number of collision-free trajectories through the robot’s environment — rather like a subway map overlaid on the map of a city. A sophisticated-enough robot might have arms with seven different joints; if the robot is also mounted on a mobile base — as was the Willow Garage PR2 that the MIT researchers used — then checking for collisions could mean searching a 10-dimensional space.
Add in a three-dimensional object with three different axes of orientation, which the robot has to push across a table, and the size of the search space swells to 16 dimensions, which is too large to search efficiently. Barry’s first step was to find a concise way to represent the physical properties of the object to be pushed — how it would respond to different forces applied from different directions. Armed with that description, she could characterize a much smaller space of motions that would propel the object in useful directions. “This allows us to focus the search on interesting parts of the space rather than simply flailing around in 16 dimensions,” she says. Finally, after her modification of the motion-planning algorithm, she had to “make sure that the theoretical guarantees of the planner still hold,” she says.
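The rapidly exploring random tree mentioned above can be sketched in two dimensions; the planners in these papers work in configuration spaces of 10 or more dimensions, and the workspace bounds, step size, and goal bias below are arbitrary choices for the example, not values from the papers.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5):
    """Minimal 2-D rapidly exploring random tree.
    is_free(p) should return True if point p is collision-free.
    Samples are drawn from an assumed [0, 10] x [0, 10] workspace."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # bias 10% of samples toward the goal to speed up convergence
        if random.random() < 0.1:
            sample = goal
        else:
            sample = (random.uniform(0, 10), random.uniform(0, 10))
        # extend the tree from the node nearest to the sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # walk back up the tree to recover the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

Because every new node needs a nearest-neighbor search and a collision check, the cost of exploring grows quickly with dimension, which is why a 16-dimensional version of this search is impractical without the kind of focusing Barry describes.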
By contrast, Holladay’s algorithm in some sense inverts the ordinary motion-planning task. Rather than identifying paths that avoid collisions and adhering to them, it identifies paths that introduce collisions and seals them off. If the robot is using one hand to set down an object that’s prone to tipping over, for instance, “I might look for a place for the other hand that will block bad paths and kind of funnel the object into the path that I want,” Holladay says.
Like Barry, Holladay had to find a simple method of representing the physical properties of the object the robot is manipulating. In addition to the placement of tall, tippy objects, her algorithm can also handle cases in which the robot is setting an object on a table, but the object sticks to the rubber sheath of the robot’s gripper. With Holladay’s algorithm, the robot can use its free gripper to prevent the object from sliding as it withdraws the other gripper.
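The blocking idea can be caricatured in two dimensions: among candidate positions for the free gripper, prefer the one that obstructs the most directions in which the object could escape or tip. This is a toy model invented for illustration; Holladay's planner reasons about full paths in a much richer space, and every name and threshold here is an assumption.

```python
import math

def funnel_placement(obj, escape_dirs, candidates):
    """Toy blocking heuristic: pick the candidate position for the free
    gripper that blocks the most escape directions (unit vectors) of an
    object at position obj. All quantities are 2-D points/vectors."""
    def blocked(cand, direction):
        # a candidate blocks a direction if it sits roughly along it,
        # within ~25 degrees (cosine > 0.9) of the escape direction
        dx, dy = cand[0] - obj[0], cand[1] - obj[1]
        norm = math.hypot(dx, dy)
        if norm == 0:
            return False
        cos = (dx * direction[0] + dy * direction[1]) / norm
        return cos > 0.9
    return max(candidates, key=lambda c: sum(blocked(c, d) for d in escape_dirs))
```

The real contribution is doing this kind of reasoning inside a general motion planner rather than as a one-off heuristic, but the sketch captures the inversion: the second hand is placed to *create* obstacles along the bad paths.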
Both Barry's and Holladay's algorithms expose application programming interfaces through which other researchers can plug in parameters describing the physical behavior of new types of objects. But the ultimate goal is for the robot itself to infer the relevant properties of objects by lifting, shoving, or otherwise manipulating them.
Nor are the researchers concerned that hardware improvements will render their algorithmic research obsolete. “The thought is that we’re unlikely to get hands that are as flexible and dexterous as human hands, and even if we did, it would be hard to figure out the AI and planning for those,” Barry says. “So we’ll always have to think about interesting ways to grasp things.”
“You see a lot of demos where a robot might do something like slide plates, but it’s usually hard-coded for the demo: The robot knows that at this point, it needs to do this action for this particular thing,” says Kaijen Hsiao, a research scientist and manager at Willow Garage, the company that manufactures the PR2. Barry and Holladay’s research, by contrast, is “a framework for incorporating behaviors like that as a more general motion-planning problem,” she says. “Which is a very difficult thing, because it’s very high-dimensional. I think it’s really important research, and it’s very novel.”
- Toward a Perceptual Symbol System 2007-11-15
Pezzulo, G., & Calvi, G. (2006). In Proceedings of the Sixth International Conference on Epigenetic Robotics (EPIROB 2006).
- The unexpected aspects of Surprise 2007-04-10
Lorini, E., & Castelfranchi, C. (2006). International Journal of Pattern Recognition and Artificial Intelligence, 20(6), 817–835.