Sunday, November 30, 2008

RE: US Military Expands 'Human' Killer Robot Soldiers

----------------- Bulletin Message -----------------
From: Anti-NWO
Date: Nov 29, 2008 11:00 PM


----------------- Bulletin Message -----------------
From: Eddie NWO Censored
Date: Nov 29, 2008 1:14 PM


Eddie NWO Censored

November 26, 2008 - Service robots, medical robots and now military robots.
This is the first-ever Israeli robotics conference, in Herzeliya, Israel, and the inventors' aim is to get their innovations recognized. Israel has a thriving robotics industry, and one of the devices being promoted is the one shown below.




Israeli robots deal with dead Palestinians

Foster-Miller's new MAARS armed robot rolls out, with machine gun at the ready (already deployed in IRAQ TO KILL HUMANS)





*****************************************
Robot May be More 'Humane' Soldier
*****************************************

November 26, 2008
International Herald Tribune

http://www.military.com/news/article/robot-may-be-more-humane-soldier.html?col=1186032310810

In the heat of battle, their minds clouded by fear, anger or vengefulness, even the best-trained soldiers can act in ways that violate the Geneva Conventions or battlefield rules of engagement.
Now some researchers suggest that robots could do better

"My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can," said Ronald Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the U.S. Army.
"That's the case I make"

Robot drones, mine detectors and sensing devices are already common on the battlefield but are controlled by humans. Many of the drones in Iraq and Afghanistan are operated from a command post in Nevada.
Arkin is talking about true robots operating autonomously, on their own

He and others say that the technology to make lethal autonomous robots is inexpensive and proliferating, and that the advent of these robots on the battlefield is only a matter of time.
That means, they say, it is time for people to start talking about whether this technology is something they want to embrace

"The important thing is not to be blind to it," Arkin said

Noel Sharkey, a computer scientist at the University of Sheffield in Britain, wrote last year in the journal Innovative Technology for Computer Professionals that "this is not a 'Terminator'-style science fiction but grim reality"

He said South Korea and Israel were among countries already deploying armed robot border guards.
In an interview, he said there was "a headlong rush" to develop battlefield robots that make their own decisions about when to attack

"We don't want to get to the point where we should have had this discussion 20 years ago," said Colin Allen, a philosopher at Indiana University and a co-author of "Moral Machines: Teaching Robots Right From Wrong," published this month by Oxford University Press

Randy Zachery, who directs the Information Science Directorate of the Army Research Office, which is financing Arkin's work, said the Army hoped this "basic science" would show how human soldiers might use and interact with autonomous systems and how software might be developed to "allow autonomous systems to operate within the bounds imposed by the warfighter"

"It doesn't have a particular product or application in mind," said Zachery, an electrical engineer.
"It is basically to answer questions that can stimulate further research or illuminate things we did not know about before"

And Lieutenant Colonel Martin Downie, a spokesman for the Army, noted that whatever emerged from the work "is ultimately in the hands of the commander in chief, and he's obviously answerable to the American people, just like we are"

In a report to the Army last year, Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear.
They can be built to show no anger or recklessness, Arkin wrote, and they can be made invulnerable to what he called "the psychological problem of 'scenario fulfillment,'" which causes people to absorb new information more easily if it agrees with their pre-existing ideas

Arkin's report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of U.S. Soldiers and Marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents.
More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior

Troops who were stressed, angry, anxious or mourning lost colleagues or who had handled the dead were more likely to say they had mistreated civilian noncombatants, the survey said.
(The survey can be read by searching for 1117mhatreport at www.globalpolicy.org)

"It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield," Arkin wrote in his report, "but I am convinced that they can perform more ethically than human soldiers are capable of"

Arkin said he could imagine a number of ways in which autonomous robot agents might be deployed as "battlefield assistants" - in countersniper operations, clearing buildings of terrorists or other dangerous assignments where there may not be time for a robotic device to relay sights or sounds to a human operator and wait for instructions

But first those robots would need to be programmed with rules about when it is acceptable to fire on a tank, and about more complicated and emotionally fraught tasks, like how to distinguish civilians, the wounded or someone trying to surrender from enemy troops on the attack, and whom to shoot

In their book, Allen and his co-author, Wendell Wallach, a computer scientist at the Yale Interdisciplinary Center for Bioethics, note that an engineering approach "meant to cover the range of challenges" will probably seem inadequate to an ethicist.
And from the engineer's perspective, they write, making robots "sensitive to moral considerations will add further difficulties to the already challenging task of building reliable, efficient and safe systems"

But, Allen added in an interview, "Is it possible to build systems that pay attention to things that matter ethically? Yes"

Daniel Dennett, a philosopher and cognitive scientist at Tufts University, agrees. "If we talk about training a robot to make distinctions that track moral relevance, that's not beyond the pale at all," he said.
But, he added, letting machines make ethical judgments is "a moral issue that people should think about"

Sharkey said he would ban lethal autonomous robots until they demonstrate they will act ethically, a standard he said he believes they are unlikely to meet.
Meanwhile, he said, he worries that advocates of the technology will exploit the ethics research "to allay political opposition"

Arkin's simulations play out in black and white computer displays

"Pilots" have information a human pilot might have, including maps showing the location of sacred sites like houses of worship or cemeteries, as well as apartment houses, schools, hospitals or other centers of civilian life

They are instructed as to the whereabouts of enemy materiel and troops, and especially high-priority targets.
And they are given the rules of engagement, directives that limit the circumstances in which they can initiate and carry out combat. The goal, he said, is to integrate the rules of war with "the utilitarian approach - given military necessity, how important is it to take out that target?"

Arkin's approach involves creating a kind of intellectual landscape in which various kinds of action occur in particular "spaces." In the landscape of all responses, there is a subspace of lethal responses.
That lethal subspace is further divided into spaces for ethical actions, like firing a rocket at an attacking tank, and unethical actions, like firing a rocket at an ambulance

For example, in one situation playing out in Arkin's computers, a robot pilot flies past a small cemetery. The pilot spots a tank at the cemetery entrance, a potential target. But a group of civilians has gathered at the cemetery, too. So the pilot decides to keep moving, and soon spots another tank, standing by itself in a field.
The pilot fires; the target is destroyed

In Arkin's robotic system, the robot pilot would have what he calls a "governor." Just as the governor on a steam engine shuts it down when it runs too hot, the ethical governor would quash actions in the lethal/unethical space

In the tank-cemetery circumstance, for example, the potentially lethal encounter is judged unethical because the cemetery is a sacred site and the risk of civilian casualties is high. So the robot pilot declines to engage. When the robot finds another target with no risk of civilian casualties, it fires.
In another case, attacking an important terrorist leader in a taxi in front of an apartment building, for example, might be regarded as ethical if the target is important and the risk of civilian casualties low
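
As a rough illustration of the "governor" idea described above, the sketch below filters a proposed lethal action through a few of the constraints the article mentions (protected sites, civilian risk, military necessity). The class, field names and thresholds are invented for illustration; this is not Arkin's actual software.

```python
from dataclasses import dataclass

@dataclass
class Target:
    kind: str                  # e.g. "tank", "ambulance"
    near_protected_site: bool  # cemetery, hospital, house of worship, school...
    civilian_risk: float       # estimated chance of civilian casualties, 0..1
    military_necessity: float  # importance of the target, 0..1

PROTECTED_KINDS = {"ambulance", "civilian", "surrendering"}

def governor_permits_engagement(t: Target, max_civilian_risk: float = 0.1) -> bool:
    """Return True only if a lethal response falls in the 'ethical' subspace."""
    if t.kind in PROTECTED_KINDS:
        return False           # firing on these is always in the unethical subspace
    if t.near_protected_site:
        return False           # sacred sites rule out engagement
    if t.civilian_risk > max_civilian_risk:
        return False           # too much collateral risk
    return t.military_necessity > t.civilian_risk   # crude utilitarian weighing

# The two scenarios from the article:
tank_at_cemetery = Target("tank", near_protected_site=True,
                          civilian_risk=0.8, military_necessity=0.6)
tank_in_field = Target("tank", near_protected_site=False,
                       civilian_risk=0.0, military_necessity=0.6)

print(governor_permits_engagement(tank_at_cemetery))  # False: the robot holds fire
print(governor_permits_engagement(tank_in_field))     # True: the robot engages
```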

Some who have studied the issue worry, as well, whether battlefield robots designed without emotions will lack empathy. Arkin, a Christian who acknowledged the help of God and Jesus Christ in the preface to his book "Behavior-Based Robotics" (MIT Press, 1998), reasons that because rules like the Geneva Conventions are based on humane principles, building them into the machine's mental architecture endows it with a kind of empathy.
He added, though, that it would be difficult to design "perceptual algorithms" that could recognize when people were wounded or holding a white flag or otherwise "hors de combat"

Still, he said, "as the robot gains the ability to be more and more aware of its situation," more decisions might be delegated to robots.
"We are moving up this curve"

He said that was why he saw provoking discussion about the technology as the most important part of his work.
And if autonomous battlefield robots are banned, he said, "I would not be uncomfortable with that at all"

Boston Dynamics BigDog quadruped robots

Crusher (CMU's military Unmanned Ground ARMED Vehicle)

Foster-Miller Talon UGV


Predator drone that killed dozens of Pakistanis this year

The Oblique Flying Wing - DARPA kills shape-shifting, supersonic bomber

********************************************************************
US Army to Equip National Guard Unit With Future Combat System Aerial Robots
********************************************************************

PR Newswire
25 Nov 2008

http://www.earthtimes.org/articles/show/us-army-to-equip-national,636114.shtml

HUNTSVILLE, Ala., Nov. 25 /PRNewswire-USNewswire/ -- The Army's commitment to equipping its total force with Future Combat System (FCS) capabilities continues as the first Army National Guard unit -- the 56th Stryker Brigade Combat Team of the Pennsylvania National Guard -- begins training next month with the FCS-developed gasoline-powered Micro Air Vehicle (gMAV) prior to the unit's deployment to Iraq in January

The gMAV is a precursor technology to the FCS program's Class I Unmanned Air Vehicle (UAV) that will be fielded to Infantry Brigade Combat Teams starting in 2011. The Class I UAV is currently under evaluation by Soldiers of the Army's Evaluation Task Force at Fort Bliss, Texas. The gMAV is man-packable and provides a hover-and-stare capability not currently present in either Army or Air Force UAV inventories.
Its sensor platform can take still and video imagery, which provides key intelligence for precision targeting and surveillance operations

The 56th Stryker Brigade will replace elements of the 2/25th Stryker Brigade who have used the gMAV for reconnaissance and convoy protection operations while deployed to Iraq and participated in extensive gMAV testing and evaluation experiments prior to that

"This fielding is unique as the 56th Stryker Brigade represents the first National Guard Unit to use FCS developed Unmanned Air Vehicles," said Army Major Gregg Dellert, FCS assistant product manager for Micro Air Vehicle and Class I Block Zero Unmanned Air Vehicles.
"The 2/25th Stryker Brigade has been using the gMAV for some time now, but we expect to gain new insight from the fresh user perspective the guard unit will bring"

The gMAV started life as a Defense Advanced Research Projects Agency initiative but battlefield needs, as stressed by a Joint Operational Needs Statement endorsed by the Joint Chiefs of Staff in 2006, helped put the gMAV in the hands of Soldiers deployed to Iraq.
The gMAV has also been successfully used in theater by the Navy as part of a joint task force ordnance explosive disposal unit

Starting in early December, Dellert will train 10 Guardsmen from the 56th Stryker Brigade during a course on gMAV fundamentals and field use. Once deployed, these Soldiers will then be responsible for training gMAV operators. The 56th Stryker Brigade will use 15 gMAVs for reconnaissance and other protection operations. Due to their mission, it is expected that the National Guard Soldiers will find different ways to use the gMAV in theater. "In terms of both the future development of the gMAV and the FCS Class I UAV, having a fresh set of eyes will prove very useful.
These National Guard Soldiers will help our FCS developers make sure that future versions of these UAVs will have all capabilities required for robust mission sets."

Future Combat Systems is the cornerstone of the Army's modernization efforts, consisting of a family of new combat vehicles, unmanned aerial and ground systems and unattended ground sensors and munitions all connected by a state-of-the-art network






******************************************************
US agency sees robots replacing humans in service jobs by 2025
******************************************************

Robot workers could 'disrupt unskilled labor markets,' federal report says

Computerworld
By Patrick Thibodeau

http://computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=privacy&articleId=9121385&taxonomyId=84&intsrc=kc_top

November 24, 2008 (Computerworld) A U.S. government intelligence agency thinks robots may be so capable by 2025 that questions such as "Would you like fries with that?" may be uttered by a smiling machine at the order counter.

In a report titled "Global Trends 2025: A Transformed World" that was released last week, the National Intelligence Council offered its long-range strategic thinking about the military and economic challenges the U.S. will face from other countries over the next 17 years, as well as the environmental challenges ahead.
The report also looks at technologies, and it includes some sweeping ideas about the future

IT workers have long been familiar with the ways that advances in automation can reduce the need for people, especially in data centers.
By 2025, robotics technology will be far enough along to take over low-skill jobs, according to the NIC

That could provide benefits, such as enabling robots to be used to help provide care for the elderly.
But the machines also may be far enough along "to disrupt unskilled labor markets," the NIC said, adding that they could also affect immigration patterns by taking over some jobs now performed by migrant workers

Vendors such as MobileRobots Inc. already are offering products that provide an idea of what the future may look like.
Meanwhile, last year a British artificial intelligence researcher wrote a book predicting that humans will fall in love with and even marry robots by 2050

The NIC's report doesn't address that possibility. But it does say that robotic technologies may be used to augment human capabilities, much like in the 1970s television show The Six Million Dollar Man and its spin-off, The Bionic Woman.
At the extreme end, the report foresees the possible development of an exoskeleton resembling "a wearable humanoid robot, that uses sensors, interfaces, power systems and actuators to monitor and respond to arm and leg movements, providing the wearer with increased strength and control"

What may be more widespread, the NIC said, are "human cognitive augmentation technologies" — wearable devices that can help improve vision, hearing and memory

Separately, the agency also predicted that by 2025, there will be an "Internet of Things" created by the ubiquitous use of radio frequency identification tags on a wide variety of physical items, such as food packages, furniture and paper documents.
Such objects will be able to be located, identified, monitored and remotely controlled via the Internet, according to the report

The vast reservoir of RFID data will be managed on high-performance computers connected via next-generation Internet technologies, the NIC said. It contended that the trend toward increased use of RFID is inevitable and will be driven by the need to improve supply chain operations and logistics as well as energy efficiency and data security.
But privacy concerns will create a big barrier to entry for companies, the agency said

Other predictions in the report include a belief that new kinds of energy storage technologies, such as batteries and fuel cells, will emerge by 2025. In addition, much of the report focuses on how the U.S. will fare in a changing world. "By 2025, the U.S. will find itself as one of a number of important actors on the world stage, albeit still the most powerful one," the NIC report said

November 19, 2008 - IBM Seeks to Build the Computer of the Future Based on Insights from the Brain - In an unprecedented undertaking, IBM Research and five leading universities are partnering to create computing systems that are expected to simulate and emulate the brain's abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size. Science has come a long way in understanding the body's central nervous system, but the way our brains work - the fact that we recognize patterns and base our thoughts and ideas on past experiences, for example - remains largely a mystery.
Understanding the process behind these effortless feats of the human brain and creating a computational theory based on it is one of the biggest and most fundamental challenges for computer scientists today, and IBM researchers are one step closer to making this quest a reality


Mimicking synapses like this one is crucial to the effort

*************************************
IBM to build brain-like computers
*************************************

BBC News
Friday, 21 November 2008
By Jason Palmer
Science and technology reporter

IBM has announced it will lead a US government-funded collaboration to make electronic circuits that mimic brains

Part of a field called "cognitive computing", the research will bring together neurobiologists, computer and materials scientists and psychologists

As a first step in its research the project has been granted $4.9m (£3.27m) from US defence agency Darpa

The resulting technology could be used for large-scale data analysis, decision making or even image recognition

"The mind has an amazing ability to integrate ambiguous information across the senses, and it can effortlessly create the categories of time, space, object, and interrelationship from the sensory data," says Dharmendra Modha, the IBM scientist who is heading the collaboration

"There are no computers that can even remotely approach the remarkable feats the mind performs," he said

"The key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain"

'Perfect storm'

IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons.
The team will then aim to produce for the first time an electronic system that behaves as the simulations do

The longer-term goal is to create a system with the level of complexity of a cat's brain

Prof Modha says that the time is right for such a cross-disciplinary project because three disparate pursuits are coming together in what he calls a "perfect storm"

Neuroscientists working with simple animals have learned much about the inner workings of neurons and the synapses that connect them, resulting in "wiring diagrams" for simple brains

Supercomputing, in turn, can simulate brains up to the complexity of small mammals, using the knowledge from the biological research.
Modha led a team that last year used the BlueGene supercomputer to simulate a mouse's brain, comprising 55m neurons and some half a trillion synapses

"But the real challenge is then to manifest what will be learned from future simulations into real electronic devices - nanotechnology," Prof Modha said

Technology has only recently reached a stage in which structures can be produced that match the density of neurons and synapses from real brains - around 10 billion in each square centimetre

Networking

Researchers have been using bits of computer code called neural networks that seek to represent connections of neurons.
They can be programmed to solve a particular problem - behaviour that appears to be the same as learning
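
As a toy example of that conventional, objective-first style, the sketch below trains a small neural network to compute one fixed function (XOR) by adjusting its connection weights. It is a generic textbook illustration, not code from the IBM project.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)             # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)             # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                                     # plain gradient descent
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                    # squared-error gradient
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```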

But this approach is fundamentally different

"The issue with neural networks and artificial intelligence is that they seek to engineer limited cognitive functionalities one at a time.
They start with an objective and devise an algorithm to achieve it," Prof Modha says

"We are attempting a 180 degree shift in perspective: seeking an algorithm first, problems second.
We are investigating core micro- and macro-circuits of the brain that can be used for a wide variety of functionalities"

The problem is not in the organisation of existing neuron-like circuitry, however; the adaptability of brains lies in their ability to tune synapses, the connections between the neurons

Synaptic connections form, break, and are strengthened or weakened depending on the signals that pass through them.
Making a nano-scale material that can fit that description is one of the major goals of the project

"The brain is much less a neural network than a synaptic network," Modha says

First thought

The fundamental shift toward putting the problem-solving before the problem makes the potential applications for such devices practically limitless

Free from the constraints of explicitly programmed function, computers could gather together disparate information, weigh it based on experience, form memory independently and arguably begin to solve problems in a way that has so far been the preserve of what we call "thinking"

"It's an interesting effort, and modelling computers after the human brain is promising," says Christian Keysers, director of the neuroimaging centre at University Medical Centre Groningen.
However, he warns that the funding so far is likely to be inadequate for such a large-scale project

That the effort requires the expertise of such a variety of disciplines means that the project is unprecedented in its scope, and Dr Modha admits that the goals are more than ambitious

"We are going not just for a homerun, but for a homerun with the bases loaded," he says



Yaskawa Electric Worker bot sorts packages
http://www.pinktentacle.com/2007/07/worker-bot-sorts-packages/



Tmsuk T-52 Enryu Rescue Robot - Kinda scary


Japan corpse removal robot

*****************************************************
Robots seen doing work of 3.5 million in Japan
*****************************************************

Reuters
Tue Apr 8, 2008

TOKYO (Reuters) - Robots could fill the jobs of 3.5 million people in graying Japan by 2025, a thinktank says, helping to avert worker shortages as the country's population shrinks.


Japan faces a 16 percent slide in the size of its workforce by 2030 while the number of elderly will mushroom, the government estimates, raising worries about who will do the work in a country unused to, and unwilling to contemplate, large-scale immigration.


The thinktank, the Machine Industry Memorial Foundation, says robots could help fill the gaps, ranging from microsized capsules that detect lesions to high-tech vacuum cleaners.


Rather than each robot replacing one person, the foundation said in a report that robots could make time for people to focus on more important things.


Japan could save 2.1 trillion yen ($21 billion) of elderly insurance payments in 2025 by using robots that monitor the health of older people, so they don't have to rely on human nursing care, the foundation said in its report.


Caregivers would save more than an hour a day if robots helped look after children, older people and did some housework, it added. Robotic duties could include reading books out loud or helping bathe the elderly.


"Seniors are pushing back their retirement until they are 65 years old, day care centers are being built so that more women can work during the day, and there is a move to increase the quota of foreign laborers. But none of these can beat the shrinking workforce," said Takao Kobayashi, who worked on the study.


"Robots are important because they could help in some ways to alleviate such shortage of the labor force.
"

The current fertility rate is 1.3 babies per woman, far below the level needed to maintain the population, while the government estimates that 40 percent of the population will be over 65 by 2055, raising concerns about who will look after the graying population.


Kobayashi said changes were still needed for robots to make a big impact on the workforce.


"There's the expensive price tag, the functions of the robots still need to improve, and then there are the mindsets of people," he said.


"People need to have the will to use the robots.
"

(Reporting by Yoko Kubota; Editing by Rodney Joyce)

Robots are increasingly taking over more soldier duties in Iraq and Afghanistan, with predictions that as much as 30 percent of the U.S. Army will be robotic by 2020. WUSTL computer scientists who work on robots say the machines still need the human touch


Foster-Miller's Talon killer robot - already deployed in Iraq to kill human beings

***********************************
Military use of robots increases
***********************************

Physorg
August 04, 2008

http://www.physorg.com/news137088476.html

Robots are increasingly taking over more soldier duties in Iraq and Afghanistan, with predictions that as much as 30 percent of the U.S. Army will be robotic by 2020. WUSTL computer scientists who work on robots say the machines still need the human touch. Image: WUSTL

War casualties are typically kept behind tightly closed doors, but one company keeps the mangled pieces of its first casualty on display.
This is no ordinary soldier, though—it is Packbot from iRobot Corporation

Robots in the military are no longer the stuff of science fiction. They have left the movie screen and entered the battlefield. Washington University in St. Louis's Doug Few and Bill Smart are on the cutting edge of this new wave of technology. Few and Smart report that the military goal is to have approximately 30% of the army be robotic forces by somewhere around 2020.
Of course, this isn't robotic soldiers from movies like "Star Wars" and "I, Robot"

"When the military says 'robot' they mean everything from self-driving trucks up to what you would think of as a robot. You would more accurately call them autonomous systems rather than robots," says Smart, Ph.D., WUSTL assistant professor of computer science and engineering.

All of the army's robotic force is teleoperated, meaning that there is someone operating the robot from a remote location, perhaps with a joystick and a computer screen.
While this may seem like a limitation in plans to add robots to the military, it is actually very important to keep humans involved in robotic operations

Terminators they're not

"It's a chain of command thing. You don't want to give autonomy to a weapons delivery system. You want to have a human hit the button," says Smart. "You don't want the robot to make the wrong decision.
You want to have a human to make all of the important decisions"
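
A minimal sketch of that chain-of-command constraint is shown below: the autonomous system can propose actions, but anything weapons-related is held until a human operator explicitly approves it. The function and parameter names are invented for illustration and are not drawn from any actual military system.

```python
from typing import Callable

def request_action(action: str,
                   requires_weapon: bool,
                   operator_approves: Callable[[str], bool]) -> bool:
    """Execute non-lethal actions immediately; gate lethal ones on a human decision."""
    if not requires_weapon:
        return True                    # navigation, sensing, etc. proceed autonomously
    return operator_approves(action)   # weapons release needs an explicit human yes

# Example with a stand-in operator who always declines:
deny_all = lambda action: False
print(request_action("drive to waypoint 3", requires_weapon=False, operator_approves=deny_all))  # True
print(request_action("engage target Alpha", requires_weapon=True, operator_approves=deny_all))   # False
```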

And while movies display robots as intelligent beings, Smart and Few aren't necessarily looking for intelligent decision-making in their robots.
Instead, they are working to develop improved, "intelligent" functioning of the robot

"It's oftentimes like the difference between the adverb and noun. You can act intelligently or you can be intelligent.
I'm much more interested in the adverb for my robots," says Few

Few, who is Smart's Ph.D. student, is also interested in the delicate relationship between robot and human. He is working to develop a system in which the robot can carry out a task while keeping a human in the loop and with the ability to create new goals for the robot.
Few says that there are many issues that may require "a graceful intervention" by humans and these need to be thought of from the ground up

"When I envision the future of robots, I always think of the Jetsons," says Few. "George Jetson never sat down at a computer to task Rosie to clean the house. Somehow, they had this local exchange of information.
So what we've been working on his how we can use the local environment rather than a computer as a tasking medium to the robot "

To work toward this goal, Few has incorporated what many would simply consider a toy into robotic programming. Using a Wii controller, Few capitalizes on natural human movements to communicate with the robot. Using something as simple and as common as this video game controller also has added benefits in a military setting.
Rather than carting around a heavy laptop and being forced to focus on a joystick and screen, soldiers in battle can stay alert and engaged in their surroundings while performing operations with the robot
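
The article does not describe Few's actual mapping, but the basic idea of using a controller's tilt as a tasking signal can be sketched as follows; the thresholds and the drive-command format here are assumptions made purely for illustration.

```python
def tilt_to_drive_command(ax: float, ay: float, dead_zone: float = 0.15):
    """Convert normalized accelerometer tilt (-1..1 on each axis) into
    (forward_speed, turn_rate), also in the range -1..1."""
    forward = 0.0 if abs(ay) < dead_zone else ay   # tilt forward/back drives
    turn = 0.0 if abs(ax) < dead_zone else ax      # tilt left/right steers
    return forward, turn

# e.g. controller tilted gently forward and slightly right:
print(tilt_to_drive_command(ax=0.2, ay=0.5))       # -> (0.5, 0.2)
```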

"We forget that when we're controlling robots in the lab it's really pretty safe and no one's trying to kill us," says Smart. "But if you are in a war zone and you're hunched over a laptop, that's not a good place to be.
You want to be able to use your eyes in one place and use your hand to control the robot without tying up all of your attention"

Robots are already finding a place among deployed troops. There are unmanned aerial vehicles and ground robots for explosives detection. Robotics advancements do, however, raise new ethical questions, such as where to place the blame if a robot kills someone. Nevertheless, as the technology progresses, more robots are being sent into battle first.
The mangled Packbot on display at iRobot is just one such example of a fortunate casualty

"When I stood there and looked at that Packbot, I realized that if that robot hadn't been there, it would have been some kid," reflects Few

Source: Washington University in St. Louis

iRobot’s Taserbot – Ready For US MARTIAL LAW NEEDS

************************************************************
Packs of robots will hunt down uncooperative humans
************************************************************

New Scientist
October 22, 2008 6:00 PM


The latest request from the Pentagon jars the senses. At least, it did mine.
They are looking for contractors to provide a "Multi-Robot Pursuit System" that will let packs of robots "search for and detect a non-cooperative human"

One thing that really bugs defence chiefs is having their troops diverted from other duties to control robots. So having a pack of them controlled by one person makes logistical sense.
But I'm concerned about where this technology will end up

Given that iRobot last year struck a deal with Taser International to mount stun weapons on its military robots, how long before we see packs of droids hunting down pesky demonstrators with paralysing weapons? Or could the packs even be lethally armed? I asked two experts on automated weapons what they thought - click the continue reading link to read what they said.
Both were concerned that packs of robots would be entrusted with tasks - and weapons - they were not up to handling without making wrong decisions

Read The Rest HERE

Earlier report August 13, 2008 - Robot with a rat brain
http://technology.newscientist.com/article/mg19926696.100

********************************
Computer circuit builds itself
********************************

Associated Press
15 October 2008

Organic molecules organize themselves to form a bridge between electrodes

A team of European physicists has developed an integrated circuit that can build itself.
The work, appearing in this week's Nature, is an important step towards its ultimate goal — a self-assembling computer

Today's computer chips are made by etching patterns onto semiconducting wafers using a combination of light and photosensitive chemicals. But the technique is being pushed to the limit as ever more processing power is being packed onto chips, requiring engineers to etch details just a few tens of nanometres across.
So scientists are hunting for alternative ways to assemble even tinier chips

Letting them build themselves is, in many ways, the most obvious solution, says Dago de Leeuw, a researcher at Philips Research Laboratories in Eindhoven, the Netherlands. "The nicest example is DNA," he says.
Our genetic code provides a set of instructions that can be used to marshal molecules into an entire person, and researchers would like to come up with a similar set of compounds able to organize each other into circuits


Read The Rest HERE




The Defense Advanced Research Projects Agency (DARPA) is an agency of the United States Department of Defense responsible for the development of new technology for use by the military - THE REAL LIFE SKYNET

DARPA’s iXo Artificial Intelligence Control Grid

This was constructed almost entirely using government / military quotes, animations, videos, images and photos. The narrative is sourced from government quotes from start to finish. It is the “official version”, if you will, but in an unprecedented format. It unveils the government's numerous and ongoing programs related to A.I., “NBIC”, the “Global Information Grid”, nanotechnology, biotechnology, autonomous drones, “naval sea-bases”, space weapons, weather modification… or more directly: domestic and global totalitarian technological domination. American Imperialism meets Artificial Intelligence. The only debate is: what are we going to do to stop it? Time's running out… It mostly centers around DARPA materials, as they're the fountainhead of all of this, but this is a broad multi-agency effort.
Some of the video content, the “OS” of the video, was screen-captured from the DARPA site's old iXo interactive Flash presentation from almost a year ago, which is now no longer available.
