Tag Archives: AI

Everything you need to know from EmTech Digital 2019
MIT Technology Review March 25, 2019 A report from EmTech Digital 2019, where the sharpest minds in the technology, management, startup, engineering, and academic communities converge. The article covers the following 14 stories: Tech companies must anticipate the looming risks as AI gets creative; AI researchers must confront “missed opportunities” to achieve social good; Deepfakes are solvable—but don’t forget that “shallowfakes” are already pervasive; Robots won’t make it into our houses until they get common sense; How malevolent machine learning could derail AI; How machine learning is accelerating last-mile, and last-meter, delivery; Your next car could have […]
An AI for generating fake news could also help detect it
MIT Technology Review March 12, 2019 To detect fake news, researchers at MIT and Harvard based their experiments on the hypothesis that language models produce sentences by predicting the next word in a sequence of text. So, if a model can easily predict most of the words in a given passage, the passage was likely written by a model like it. They tested this idea by building an interactive tool based on OpenAI’s GPT-2 and feeding it both machine-generated and human-written text. The tool generally identified machine-generated and human-written texts correctly. When it was fed text generated […]
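The detection heuristic described above can be sketched in a few lines. This is a toy illustration only: a tiny bigram model stands in for GPT-2, and the corpus, function names, and scoring rule are all invented for the example. The idea is the same, though — text the model itself would generate is maximally predictable, so a high fraction of top-ranked next-word guesses flags likely machine authorship.

```python
from collections import Counter, defaultdict

# Toy training corpus (stand-in for the data behind a real language model).
corpus = ("the cat sat on the mat . the cat sat on the mat . "
          "the cat ate the fish . the cat ran . the dog sat on the mat .").split()

# Bigram counts: how often each word follows each context word.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def top_prediction(word):
    """The model's single most likely next word."""
    return nxt[word].most_common(1)[0][0]

def generate(start, n):
    """Greedy generation: always emit the top prediction.

    On a model this small, greedy decoding quickly falls into a loop,
    which is fine for the illustration: the output is, by construction,
    perfectly predictable to the model that produced it."""
    out = [start]
    for _ in range(n - 1):
        out.append(top_prediction(out[-1]))
    return out

def predictability(tokens):
    """Fraction of transitions where the next word is the model's top guess."""
    hits = sum(top_prediction(a) == b for a, b in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

machine_text = generate("the", 12)
human_text = "the dog sat on the mat".split()
```

Scoring both samples shows the gap the MIT/Harvard tool exploits: the machine text scores 1.0 (every word was the model's top guess), while the human sentence makes less predictable word choices and scores lower.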
Superintelligence as a Service is Coming and It Can Be Safe AGI
Next Big Future February 25, 2019 According to a report by Drexler at the Oxford Future of Humanity Institute in the UK, the concept of comprehensive AI services (CAIS) provides a model of flexible, general intelligence in which agents are a class of service-providing products, rather than a natural or necessary engine of progress in themselves. Responsible development of AI technologies can provide an increasingly comprehensive range of superintelligent-level (SI-level) AI services that deliver the value of general-purpose AI while avoiding the risks associated with self-modifying AI agents. Tasks for advanced AI include: Modeling human concerns; Interpreting human […]
Best of arXiv.org for AI, Machine Learning, and Deep Learning – January 2019
Inside Big Data February 20, 2019 The articles are academic research papers, typically geared toward graduate students, postdocs, and seasoned professionals. Articles are listed in no particular order, each with a brief overview – Hard-Exploration Problems; Deep Neural Network Approximation for Custom Hardware: Where We’ve Been, Where We’re Going; Generating Textual Adversarial Examples for Deep Learning Models: A Survey; Revisiting Self-Supervised Visual Representation Learning; Self-Driving Cars: A Survey… read more.
Artificial intelligence controls quantum computers
Science Daily October 25, 2018 Researchers in Germany show how a neural-network-based “agent” can discover complete quantum-error-correction strategies, protecting a collection of qubits against noise. These strategies require feedback adapted to measurement outcomes. To find strategies without human intervention, they developed two-stage learning with teacher and student networks and a reward quantifying the capability to recover the quantum information stored in a multiqubit system. Beyond its immediate impact on quantum computation, the work more generally demonstrates the promise of neural-network-based reinforcement learning in physics… read more. TECHNICAL ARTICLE
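The core idea — an agent that sees only error syndromes and learns, from a recovery reward, which correction to apply — can be illustrated on a much simpler setup than the paper's. The sketch below uses a classical three-qubit repetition code and tabular Q-learning rather than the paper's neural teacher/student networks; the code structure, reward, and parameters are all assumptions made for the illustration, not the authors' method.

```python
import random

rng = random.Random(0)
ACTIONS = [None, 0, 1, 2]  # do nothing, or flip qubit 0 / 1 / 2

def noisy_codeword(bit, p):
    """Encode a logical bit as (b, b, b), then flip each qubit with prob p."""
    return [bit ^ (rng.random() < p) for _ in range(3)]

def syndrome(q):
    """Parity checks only -- like a quantum code, the agent never sees the data."""
    return (q[0] ^ q[1], q[1] ^ q[2])

def apply_action(q, a):
    if a is not None:
        q[a] ^= 1

# Tabular Q-values: expected reward of each correction, per syndrome.
Q = {s: {a: 0.0 for a in ACTIONS}
     for s in [(0, 0), (0, 1), (1, 0), (1, 1)]}
p, lr, eps = 0.1, 0.1, 0.1
for _ in range(20000):
    bit = rng.randrange(2)
    q = noisy_codeword(bit, p)
    s = syndrome(q)
    # Epsilon-greedy action selection.
    a = rng.choice(ACTIONS) if rng.random() < eps else max(Q[s], key=Q[s].get)
    apply_action(q, a)
    reward = 1.0 if q == [bit] * 3 else 0.0  # success = codeword fully restored
    Q[s][a] += lr * (reward - Q[s][a])

policy = {s: max(Q[s], key=Q[s].get) for s in Q}

def success_rate(policy, p, trials):
    """How often the learned corrections restore the encoded state."""
    ok = 0
    for _ in range(trials):
        bit = rng.randrange(2)
        q = noisy_codeword(bit, p)
        apply_action(q, policy[syndrome(q)])
        ok += q == [bit] * 3
    return ok / trials
```

With enough episodes the agent rediscovers the standard decoder (each nonzero syndrome maps to flipping the matching qubit, the trivial syndrome to doing nothing), purely from the recovery reward — a small-scale analogue of the strategies the networks in the paper discover.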
Why AI researchers shouldn’t turn their backs on the military
MIT Technology Review August 14, 2018 According to the author of a recent book, Army of None: Autonomous Weapons and the Future of War, AI researchers must be a part of these conversations, as their technical expertise is vital to shaping policy choices. We need to take into account AI bias, transparency, explainability, safety, and other concerns. AI technology has these twin features today—it’s powerful but also has many vulnerabilities, much like computers and cyber risks. Unfortunately, governments seem to have gotten the first part of that message (AI is powerful) but not the second (it comes with risks). AI […]
Researchers move closer to completely optical artificial neural network
Eurekalert July 19, 2018 There is interest in using integrated optics as a hardware platform for implementing artificial neural networks; however, the integrated photonics platform currently lacks an efficient protocol for training such networks. Researchers at Stanford University have developed a method that enables highly efficient, in situ training of a photonic neural network, using adjoint variable methods to derive the photonic analogue of the backpropagation algorithm. As a demonstration, they trained a numerically simulated photonic artificial neural network. The method may be of broad interest for experimental sensitivity analysis of photonic systems and optimization of […]
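The adjoint variable method the Stanford team builds on is worth a small worked example. The sketch below is not their photonic setup: it computes the gradient of an objective for a generic linear system A(p)x = b with one extra linear solve (the adjoint solve), which is the property that makes backpropagation-like training efficient. The matrices and parameterization are invented for the illustration, and the result is checked against a finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned base system
b = rng.standard_normal(n)
target = rng.standard_normal(n)
dA_dp = np.eye(n)  # toy parameterization: A(p) = A0 + p * I

def solve_state(p):
    """Forward solve: the 'physics' A(p) x = b."""
    return np.linalg.solve(A0 + p * np.eye(n), b)

def objective(p):
    """J(p) = 0.5 * || x(p) - target ||^2."""
    x = solve_state(p)
    return 0.5 * np.sum((x - target) ** 2)

def adjoint_gradient(p):
    """dJ/dp via the adjoint method: one forward solve + one adjoint solve.

    From A x = b:  dJ/dp = -lambda^T (dA/dp) x,  where  A^T lambda = dJ/dx.
    The cost is independent of the number of parameters, which is why
    this is the continuous analogue of backpropagation."""
    A = A0 + p * np.eye(n)
    x = np.linalg.solve(A, b)
    dJ_dx = x - target
    lam = np.linalg.solve(A.T, dJ_dx)  # adjoint solve
    return -lam @ (dA_dp @ x)

# Check against a central finite difference.
p, h = 0.3, 1e-6
fd = (objective(p + h) - objective(p - h)) / (2 * h)
```

The adjoint and finite-difference gradients agree to high precision; in the photonic setting, the appeal is that the adjoint field can be obtained physically, in situ, rather than by numerical solves.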
Institute launches the MIT Intelligence Quest
MIT News February 1, 2018 MIT has launched the MIT Intelligence Quest, an initiative to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society. The Intelligence Quest will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. To power the Quest and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving… read more.
The US military is funding an effort to catch deepfakes and other AI trickery
MIT News May 23, 2018 This summer, under a project funded by DARPA, the world’s leading digital forensics experts will gather for an AI fakery contest. They will compete to generate the most convincing AI-generated fake video, imagery, and audio—and they will also try to develop tools that can catch these counterfeits automatically. The contest will include so-called “deepfakes,” videos in which one person’s face is stitched onto another person’s body… read more.
Artificial intelligence needs to be socially responsible, says new policy report
Eurekalert May 10, 2018 In a policy report, “On AI and Robotics: Developing policy for the Fourth Industrial Revolution”, researchers in the UK contend that the development of new artificial intelligence technology is often subject to bias, and the resulting systems can be discriminatory, meaning more should be done by policymakers to ensure its development is democratic and socially responsible. In these ‘data-driven’ decision-making processes, some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences and interests of marginalised and disadvantaged people… read more.