Collective Consciousness vs. AI

Elon Musk And Over 2400 AI Scientists Sign Pledge Against Killer Robots

IN BRIEF

  • The Facts: More than 2,400 AI scientists and researchers have signed a pledge which intends to deter military firms and nations from building lethal autonomous weapon systems (LAWS).
  • Reflect On: Are we as individuals, working together on shared concerns for all humans on the planet, starting to see the power we have to effect positive change?

It is not an uncommon story in the history of science and technology: the most brilliant and innovative minds of their time discover, create, and invent technologies that can have hugely positive benefits for mankind as a whole. Inevitably, the largest and wealthiest ‘consumer’ of such technologies is the Military-Industrial Complex, and the main ways these technologies end up being used in our world are as tools of control, warfare, and human suffering.

In earlier times, scientists and inventors didn’t have much say in how their work was used, and could often be persuaded that its military application was actually for the benefit of humankind. Today those naive days are gone, and the landscape is different: some of the most prominent minds creating advanced technologies are speaking out more and more about how their work is being used in the world.

More Than 2,400 Signatories

Elon Musk of SpaceX and Demis Hassabis of Google DeepMind are among more than 2,400 signatories to the pledge, which intends to deter military firms and nations from building lethal autonomous weapon systems, also known as LAWS. The signatories are scientists who specialize in artificial intelligence (AI), and they have declared that they will not participate in the development or manufacture of robots that can identify and attack people without human oversight. The pledge was created by the Future of Life Institute:

LETHAL AUTONOMOUS WEAPONS PLEDGE

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

The Power Of Public Shaming

The pledge hopes to amount to more than just words. In calling on countries to pass legislation, technology companies to refuse contracts, and individuals to voice their opposition to lethal autonomous weapons, the signatories hope to sway public opinion overwhelmingly against LAWS and, in doing so, shame any person or group who would go ahead with their development. There is some precedent for this approach working, according to Yoshua Bengio, an AI pioneer at the Montreal Institute for Learning Algorithms:

This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the US did not sign the treaty banning landmines. American companies have stopped building landmines.

The timing of this pledge is crucial. The military is one of the largest funders and adopters of AI technology. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and patrol under seas. More sophisticated weapon systems are in the pipeline. Toby Walsh, a professor of AI at the University of New South Wales in Sydney who signed the pledge, had this to say about it:

We need to make it the international norm that autonomous weapons are not acceptable. A human must always be in the loop.

We cannot stop a determined person from building autonomous weapons, just as we cannot stop a determined person from building a chemical weapon. But if we don’t want rogue states or terrorists to have easy access to autonomous weapons, we must ensure they are not sold openly by arms companies.

Collective Consciousness Rising

This pledge is but one example of how people are implicating themselves in the future of the planet. No longer are we waiting on the sidelines and leaving decisions up to corporations, the military, or our political leaders. When we identify ourselves not as a race, culture or nation but as a planet, where all of humankind is considered part of the family, then we get to have greater access to the power of our collective consciousness. Once we harness that power, no initiative that is for the benefit of humanity is beyond our abilities.

from:    https://www.collective-evolution.com/2018/07/22/elon-musk-and-over-2400-ai-scientists-sign-pledge-against-killer-robots/

A Chip in the Old Block?

Remember when credit and debit cards went from smooth plastic to microchipped? That could be you in a few years, as multiple corporations are pushing to microchip the human race. In fact, microchip implants in humans are already on the market: an American company called Applied Digital Solutions (ADS) has developed one approximately the size of a grain of rice, which has already been approved by the U.S. Food and Drug Administration for distribution and implantation. Here is a video taken three years ago of former DARPA Director and Google executive Regina Dugan promoting the idea of microchipping humans.

Elon Musk has now weighed in on the debate, referring to this kind of implant as a ‘neural lace,’ and arguing that it will be “the thing that really matters for humanity to achieve symbiosis with machines.” Musk has expressed his concern that artificial intelligence might one day take over humanity and possibly treat us like a second-class species. He believes the neural lace is the only way to prevent this type of future.

The neural lace would be a type of brain implant that would grow into the brain and allow neurons to be programmed, strengthened, and even enhanced, essentially augmenting human intelligence.

Below is a clip of Musk talking about artificial intelligence and the neural lace, taken from the Code Conference.

What Would Be The Real Purpose Of Human Implants? What Would You Do?

It’s hard not to see ulterior motives in anything corporations do. We have been lied to so often, and our interests neglected so many times, that it’s difficult to trust anyone these days, and considering the recent Edward Snowden leaks, it seems only natural to question whether chip implants could be part of some mass surveillance agenda, or some other type of intelligence motive that the general public will be kept in the dark about. It’s important not to label these ideas as mere conspiracy theories, because then we shut ourselves off from the truth; we stop thinking about ideas and considering new information because we’ve already labelled them. I am not at all suggesting that Elon Musk is part of some agenda we are not being told of, but it’s important to remember that possibly trillions of dollars are being poured into operational black budget programs, and we don’t even know what they are. As we’ve seen so far, these programs involve the use of human beings, so it’s important we know about them.

If it came down to it, and we were required to have chip implants to replace our social security IDs, licenses, credit cards, etc., and we were unable to purchase a home, buy groceries, and otherwise participate in modern life, would you allow yourself to be chipped? What would you do? Why or why not?

from:    http://www.collective-evolution.com/2016/06/10/why-elon-musk-is-advocating-for-brain-chipping-the-human-race/

WOULD YOU BE OKAY WITH THIS?????

Musk & Hawking On Dangers of AI

Don’t let AI take our jobs (or kill us): Stephen Hawking and Elon Musk sign open letter warning of a robot uprising

  • Letter says there is a ‘broad consensus’ that AI is making good progress
  • Areas benefiting from AI research include driverless cars and robot motion
  • But in the short term, it warns AI may put millions of people out of work
  • In the long term, robots could become far more intelligent than humans
  • Elon Musk has previously linked the development of autonomous, thinking machines to ‘summoning the demon’

Artificial Intelligence has been described as a threat that could be ‘more dangerous than nukes’.

Now a group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, have signed an open letter promising to ensure AI research benefits humanity.

The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.

The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.

The authors say there is a ‘broad consensus’ that AI research is making good progress and would have a growing impact on society.

It highlights speech recognition, image analysis, driverless cars, translation and robot motion as having benefited from the research.

‘The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable,’ the authors write.

But it issued a stark warning that research into the rewards of AI had to be matched with an equal effort to avoid the potential damage it could wreak.

For instance, in the short term, it claims AI may put millions of people out of work.

In the long term, it could play out like the fictional dystopias in which machines more intelligent than humans begin acting against their programming.

‘Our AI systems must do what we want them to do,’ the letter says.

‘Many economists and computer scientists agree that there is valuable research to be done on how to maximise the economic benefits of AI while mitigating adverse effects, which could include increased inequality and unemployment.’

Other signatories to the FLI’s letter include Luke Muehlhauser, executive director of the Machine Intelligence Research Institute, and Frank Wilczek, professor of physics at the Massachusetts Institute of Technology and a Nobel laureate.

The letter comes just weeks after Professor Hawking warned that AI could someday overtake humans.

SpaceX Founder Elon Musk: AI is our ‘biggest existential threat’

GOOGLE SETS UP AI ETHICS BOARD TO CURB THE RISE OF THE ROBOTS

Google has set up an ethics board to oversee its work in artificial intelligence.

The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.

One of its founders warned that artificial intelligence is the ‘number one risk for this century,’ and believes it could play a part in human extinction.

‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.

Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the ‘number 1 risk for this century.’

The ethics board, revealed by the website The Information, is intended to ensure the projects are not abused.

Neuroscientist Demis Hassabis, 37, founded DeepMind two years ago with the aim of trying to help computers think like humans.

Speaking at an event in London, Professor Hawking told the BBC: ‘The development of full artificial intelligence could spell the end of the human race.’

This echoes claims he made earlier in the year when he said success in creating AI ‘would be the biggest event in human history, [but] unfortunately, it might also be the last.’

In November, Elon Musk, the entrepreneur behind SpaceX and Tesla, warned that the risk of ‘something seriously dangerous happening’ as a result of machines with artificial intelligence could materialize in as few as five years.

He has previously linked the development of autonomous, thinking machines to ‘summoning the demon’.

Speaking at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium in October, Musk described artificial intelligence as our ‘biggest existential threat’.

He said: ‘I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence.

‘I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.

‘With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and … he’s sure he can control the demon? Doesn’t work out.’

Musk’s Newer, Faster Travel System

Hyperloop Design To Be Released To The Public On Monday

August 12, 2013

redOrbit Staff & Wire Reports – Your Universe Online

SpaceX founder and Tesla Motors co-founder Elon Musk’s plans for a pneumatic travel system that moves faster than the speed of sound will be published on Monday, with the billionaire revealing he has no plans to patent the design and will allow other researchers to study, modify or develop it on an “open source” basis.

The system is known as the “Hyperloop,” and according to Nick Allen of The Telegraph, it is not the same as the “vactrain” (vacuum-tube train) concept currently being developed by a firm in Colorado. While few details were revealed over the weekend, Musk confirmed the system would utilize tubes, but not vacuum tubes, and would be low-friction, but not completely frictionless.

In addition, in recent weeks the 42-year-old PayPal co-founder said a design created by Canadian technology enthusiast John Gardi – which featured a tunnel that was nine feet in diameter, raised off the ground on pylons, and formed a complete loop between two different locations – was “the closest I’ve seen anyone guess so far,” according to Allen.

Gardi’s design also utilized giant turbines that would fill the tube with a stream of air, while two-meter-wide pods filled with people would be launched by an electromagnetic projectile launcher known as a rail gun, the Telegraph reporter said. When a pod neared the end of its journey, it would be shifted out of the air stream and its velocity would then be slowed by a magnetic braking system.
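
To put the launch-and-braking scheme in rough perspective, here is a quick back-of-the-envelope sketch. It is not from the article: both the cruise speed (about 340 m/s, roughly the speed of sound) and the passenger-comfort limit (about 0.5 g) are assumed figures chosen purely for illustration. It simply applies the standard kinematics relations t = v/a and d = v^2/(2a) to estimate how long, and over what distance, a rail-gun launch or magnetic-braking section would need to act.

# Back-of-the-envelope kinematics for the launch and braking sections.
# All numbers are illustrative assumptions, not figures from Gardi's design.

CRUISE_SPEED_MS = 340.0   # assumed cruise speed, roughly the speed of sound (m/s)
MAX_ACCEL_G = 0.5         # assumed comfort limit for seated passengers (in g)
G = 9.81                  # standard gravity (m/s^2)

accel = MAX_ACCEL_G * G                            # acceleration in m/s^2
time_to_cruise = CRUISE_SPEED_MS / accel           # t = v / a
section_length = CRUISE_SPEED_MS**2 / (2 * accel)  # d = v^2 / (2a)

print(f"Time to reach cruise speed: {time_to_cruise:.0f} s")                 # ~69 s
print(f"Launch (or braking) section length: {section_length/1000:.1f} km")   # ~11.8 km

Under these assumptions the same figures would apply in reverse for the magnetic braking section at the end of the journey.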

Musk first mentioned the Hyperloop back in May at D11, a conference put on by technology website All Things D. At the time, he described the concept as an alternative to California’s proposed high-speed rail project, which he said would be “the slowest bullet train in the world at the highest cost per mile.” According to VentureBeat’s Dylan Tweney, Musk also went on to call the Hyperloop “a cross between a Concorde, a railgun, and an air hockey table.”

On Wednesday, during a quarterly earnings call for his electric car company Tesla, Musk admitted he believed he might have “shot myself in the foot” by bringing up the Hyperloop concept, noting he was “too strung out” to undertake the project himself because he was too busy with Tesla, Tweney’s colleague Meghan Kelly reports. Musk did not rule out getting involved in the project and offering a helping hand, but he said he hoped to find someone else to actually build the transport system.

Largely due to his issues with California’s high-speed rail project, Musk aspires to have the first Hyperloop built in that state, connecting Los Angeles and San Francisco, Allen said. Musk believes his transport project could be built for just 10 percent of the cost of the proposed bullet train, and could allow passengers to move between the two cities in just 30 minutes instead of the anticipated three-hour travel time associated with the rail project.

“The bullet train is currently estimated to be costing $68 billion and may not be completed until 2028,” Allen said. “It would reach top speeds of only around 130mph. In a survey seven in 10 people said, if the train ever does run, they would ‘never or hardly ever’ use it anyway.”
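
As a quick arithmetic check on the speed and cost figures quoted above, the sketch below works out the implied average speeds and the implied Hyperloop price tag. It is an illustration only: the roughly 380-mile Los Angeles-to-San Francisco route length is an assumed round number, not a figure from the article.

# Quick arithmetic check of the figures quoted in the article.
# The ~380-mile LA-to-SF route length is an assumed round number for illustration.

ROUTE_MILES = 380               # assumed route distance, Los Angeles to San Francisco
HYPERLOOP_MINUTES = 30          # travel time claimed for the Hyperloop
RAIL_MINUTES = 3 * 60           # roughly three-hour travel time cited for the bullet train
RAIL_COST_BILLIONS = 68         # bullet-train cost estimate quoted by Allen ($ billions)
HYPERLOOP_COST_FRACTION = 0.10  # Musk's "10 percent of the cost" claim

hyperloop_mph = ROUTE_MILES / (HYPERLOOP_MINUTES / 60)         # ~760 mph, near the speed of sound
rail_mph = ROUTE_MILES / (RAIL_MINUTES / 60)                   # ~127 mph, close to the quoted 130 mph
hyperloop_cost = RAIL_COST_BILLIONS * HYPERLOOP_COST_FRACTION  # ~$6.8 billion

print(f"Implied Hyperloop average speed: {hyperloop_mph:.0f} mph")
print(f"Implied bullet-train average speed: {rail_mph:.0f} mph")
print(f"Implied Hyperloop cost: ${hyperloop_cost:.1f} billion")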

from:    http://www.redorbit.com/news/space/1112920735/hyperloop-plans-to-be-unveiled-081213/