Thursday, June 9, 2016

Friday Thinking 10 June 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Contents
Quotes

Articles

The Dominant Life Form in the Cosmos Is Probably Superintelligent Robots



We do believe the traditional ownership model is being disrupted...We’re going to see more change in the next five to ten years than we’ve seen in the last 50.
MARY BARRA, GM CEO

You could say there would be less vehicles sold, but we’re changing our business model to look at this as vehicle miles traveled...I could argue that with autonomous vehicles, the actual mileage on those vehicles will accumulate a lot more than a personally owned vehicle.
MARK FIELDS, FORD CEO

Internet trends report | Mary Meeker



There’s a common misconception that the government issues our currency. It doesn’t. Printing bills is completely different from issuing them into circulation as currency, not to mention that over 97% of our money supply is electronic bits in bank accounts, not paper bills. The Federal Reserve actually buys the bills for pennies on the dollar, as if they were paying for printing services, because basically they are. Then banks (including the Fed) issue dollars into circulation.

National currencies aren’t as Centralized, and Bitcoin isn’t as Decentralized, as you think




Imagine your city as it might be in the not-so-distant future.
Transportation in this city is varied, pleasant, and low-impact. There are safe and efficient bike lanes, and anyone can order a cheap ride from an autonomous, minimal-emissions vehicle. Because fewer people drive, and almost no one idles in traffic, air quality is high. There are plenty of parks and open spaces because cars are less prevalent. Life in your city is happy, healthy, and sustainable. Your city is, above all, a smart city.

The smart city, like the smart home, is built on and around the "Internet of things," in which networked products gather, store, and share user data while communicating with one another in order to create improved and highly efficient living environments. In a smart city, the Internet of things expands outward from the home into a plethora of automated and interconnected urban devices. The communication between and among these devices allows vast amounts of municipal data to be gathered and analyzed. By leveraging this massive data collection to learn about its residents, a smart city begins to transcend the Internet of things itself: the data sets it gathers are gradually helping researchers understand vast and complex networks.

Urban planning tools synthesize & collect data to improve quality of city life



Twenty years ago if you had access to a large information center, such as the Library of Congress, and someone asked you a series of questions, your task would have been to pore through the racks of books to come up with the answers. The time involved could have easily added up to 10 hours per question.

Today, if we are faced with uncovering answers from a digital Library of Congress, using keyboards and computer screens, the time-to-answer process has been reduced to as little as 10 minutes.

The next iteration of interface design will give us the power to find answers in as little as 10 seconds. That’s where neural lace technology comes into play.

The ease and fluidity of our information-to-brain interface will have a profound effect on everything from education, to the way we conduct business, to the way we function as a society.

After we achieve a 10-second interface, we’ll immediately set our sights on the next milestone, the 10-millisecond interface.

Creating the World’s First Neural Lace Network



Instead of continuing to develop new materials the old-fashioned way — stumbling across them by luck, then painstakingly measuring their properties in the laboratory — Marzari and like-minded researchers are using computer modelling and machine-learning techniques to generate libraries of candidate materials by the tens of thousands. Even data from failed experiments can provide useful input. Many of these candidates are completely hypothetical, but engineers are already beginning to shortlist those that are worth synthesizing and testing for specific applications by searching through their predicted properties — for example, how well they will work as a conductor or an insulator, whether they will act as a magnet, and how much heat and pressure they can withstand.
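As a rough illustration of that shortlisting step, here is a minimal Python sketch. The candidate materials and property values below are entirely invented; real pipelines would screen databases of tens of thousands of computed structures.

```python
# Toy sketch of computational materials screening: a library of hypothetical
# candidates with predicted properties, shortlisted for a target application.
# All names and numbers are made up for illustration.

candidates = [
    {"name": "A2B", "band_gap_eV": 0.0, "max_temp_K": 1500, "magnetic": False},
    {"name": "AB3", "band_gap_eV": 1.4, "max_temp_K": 900,  "magnetic": False},
    {"name": "A3C", "band_gap_eV": 3.2, "max_temp_K": 400,  "magnetic": True},
    {"name": "BC2", "band_gap_eV": 1.1, "max_temp_K": 1100, "magnetic": False},
]

def shortlist(materials, min_gap, max_gap, min_temp):
    """Keep candidates whose predicted properties fit the application."""
    return [m for m in materials
            if min_gap <= m["band_gap_eV"] <= max_gap
            and m["max_temp_K"] >= min_temp]

# e.g. a semiconductor-like band gap plus reasonable thermal stability
picks = shortlist(candidates, min_gap=1.0, max_gap=2.0, min_temp=1000)
print([m["name"] for m in picks])  # only BC2 survives both filters
```

Only the candidates that pass every predicted-property filter would be worth the expense of synthesizing and testing in the lab.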

Can artificial intelligence create the next wonder material?



India put a satellite around Mars in 2014 for just $72 million. By comparison, the budget for the film Gravity was closer to $100 million -- so a real Indian space mission cost less than a fake American one.

One of the reasons that India has been able to punch well above its financial weight in the space race is its considered approach. Rather than running practical experiments, the country spends plenty of time scouring data from other countries' space missions. That way, it can identify errors that entities like NASA and Roscosmos made previously and find shortcuts around them.

India successfully tests first tiny reusable space shuttle




The most salient costs facing organizations today are opportunity costs - the costs of generating and/or seizing opportunities. A rigid focus on scaling efficiency may have the unintended consequence of both increasing costs and impeding innovation: by the time the system is efficient, the acceleration of change has put the organization behind the curve. What is needed is a comprehensive approach for generating and seizing opportunity and enabling innovation.
Yet many institutions and companies remain unaware of this radical shift. They often confuse invention and innovation. Invention is the creation of a technology or method. Innovation concerns the use of that technology or method to create value. The agile approaches needed for open innovation 2.0 conflict with the 'command and control' organizations of the industrial age (see ‘How innovation modes have evolved’). Institutional or societal cultures can inhibit user and citizen involvement. Intellectual-property (IP) models may inhibit collaboration. Government funders can stifle the emergence of ideas by requiring that detailed descriptions of proposed work are specified before research can begin. Measures of success, such as citations, discount innovation and impact. Policymaking lags behind the market place.

The challenge is how to execute and govern the new mode. Innovation is a risky business that has high failure rates — 96% of all innovations do not return their capital cost, and 66% of new products fail within two years. But the potential benefits are vast. Innovation policies should recognize that the linear research-and-development model will be outpaced by a nonlinear, open and collaborative innovation process where the mantra is 'fail fast, scale fast'.

Twelve principles for open innovation 2.0

Evolve governance structures, practices and metrics to accelerate innovation in an era of digital connectivity, writes Martin Curley.
A new mode of innovation is emerging that blurs the lines between universities, industry, governments and communities. It exploits disruptive technologies — such as cloud computing, the Internet of Things and big data — to solve societal challenges sustainably and profitably, and more quickly and ably than before. It is called open innovation 2.0.

The promise is sustainable, intelligent living: innovations drive economic growth and improve quality of life while reducing environmental impact and resource use. For example, a dynamic congestion-charging system can adjust traffic flow and offer incentives to use park-and-ride schemes, guided by real-time traffic levels and air quality. Car-to-car communication could manage traffic to minimize transit times and emissions and eliminate road deaths from collisions. Smart electricity grids lower costs, integrate renewable energies and balance loads. Health-care monitoring enables early interventions, improving life quality and reducing care costs.

Companies are opening up their research labs. Philips has converted its research facility in Eindhoven, which had 2,400 employees in 2001, to an open research campus (High Tech Campus Eindhoven) that now houses more than 140 firms and around 10,000 researchers. Breakthrough ideas often emerge at the intersection of disciplines.


At MIT, researchers have been exploring the potential of blockchain technology - here’s an article concerning the application of this technology to educational credentials - this is definitely a space to watch. In particular, some of the discussion is very helpful in understanding the challenges and potential of blockchain in re-imagining our institutions and enabling institutional innovations.
Working on this project, we have not only learned a lot about the blockchain, but also about the way that technology can shape socioeconomic practices around the concept of credentials. We hope that sharing some of the things we have grappled with and the decisions we made (and why) will be useful for other developers and institutions interested in developing digital credential systems that make use of blockchain architectures.

When certification systems are not working well, the consequences can be more than just inefficient, such as the cumbersome and expensive process of requesting a university transcript: they can be disastrous, such as when a refugee is unable to provide a certificate of completed study, and is therefore prevented from continuing her education. Digital systems could help in both of these situations.

What we learned from designing an academic certificates system on the blockchain

Over the past year, we have been working on a set of tools to issue, display, and verify digital credentials using the Bitcoin blockchain and the Mozilla Open Badges specification. Today we are releasing version 1 of our code under the MIT open-source license to make it easier for others to start experimenting with similar ideas. In addition to opening up the code, we also want to share some of our thinking behind the design, as well as some of the interesting questions about managing digital reputations that we plan to continue working on.

You can find links to our source code, documentation, and discussion on our project homepage:

The overall design of the certification architecture is fairly simple. A certificate issuer signs a well-structured digital certificate and stores its hash within a blockchain transaction. A transaction output is assigned to the recipient.
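That issue-and-verify flow can be sketched in a few lines of Python. This is a simplified illustration only: it uses a bare SHA-256 hash, skips the issuer’s signature and the actual Bitcoin transaction, and all names and fields are hypothetical.

```python
import hashlib
import json

def hash_certificate(cert: dict) -> str:
    """Canonicalize the certificate as JSON, then hash it. In a real system
    this hash would be embedded in a blockchain transaction; here we just
    return it."""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_certificate(cert: dict, anchored_hash: str) -> bool:
    """A verifier recomputes the hash of the certificate it was shown and
    compares it to the hash anchored on the blockchain."""
    return hash_certificate(cert) == anchored_hash

# Hypothetical certificate contents, for illustration only
cert = {"recipient": "alice@example.org", "degree": "MSc", "issuer": "Example U"}
anchored = hash_certificate(cert)   # this is what gets stored on-chain
assert verify_certificate(cert, anchored)

# Any tampering with the certificate breaks verification
tampered = dict(cert, degree="PhD")
assert not verify_certificate(tampered, anchored)
```

The point of the design is that the chain only ever stores the hash: the certificate itself stays with the recipient, yet anyone can check it against the anchored hash.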



This is a 5 min read - an article about currencies and the difference between the way the current banking system keeps its ‘ledgers’ and how the blockchain can disrupt this traditional approach to keeping account-transaction ledgers.

National currencies aren’t as Centralized, and Bitcoin isn’t as Decentralized, as you think

The Surprising Way Dollars Actually Work
National currencies are not what we think they are. We think that all dollars are the same, but they’re not. As a currency designer, I can tell you that a currency follows one clear set of rules. Many different sets of rules mean many different currencies.

The dollars that are in your wallet are not the same as dollars in your bank account. Maybe this is obvious to you, but most people just don’t think about it. Dollars in your bank account can be transacted by writing a check or by electronic transfer. They can earn interest and affect your credit rating. However, dollars in your wallet do none of those things. Instead, the transaction of dollars from your wallet leaves no trace, unless someone in the transaction wants to generate their own record. They follow different rules.

These are examples of two separate currencies that only seem the same because they are valued the same and are exchangeable with each other. You can bring cash dollars to the bank and deposit them to your account. Or you can withdraw dollars from your account and walk out with cash. This makes us think they’re one currency when in fact they’re different currencies, with one name, that are designed to be so easy to exchange with each other that they seem like a single currency.

But the full picture is much stranger than that, because different banks have different rules from each other. Each bank is really creating its own currency for its own accounts. They used to even print their own paper bills. But now all the banks agree to exchange that currency on par with cash dollars, so all these diverse currencies appear to be a single unified one.


This is a significant development - although not ready for primetime - it heralds the future of search as we become ever more dependent on visualizations of data. This approach is already moving fast in the world of face and image recognition. From a knowledge management perspective we should all know that images convey more than words.
But their most remarkable discovery is that the most successful papers tend to have more figures. By plotting the number of diagrams in a paper against its impact, the team concludes that high impact ideas tend to be conveyed visually.
...That is interesting work that provides the foundation for an entirely new kind of science. The team calls this “viziometrics,” the science of visual information. This mirrors bibliometrics, which is the statistical study of publications, and scientometrics which is the study of measuring science.

The First Visual Search Engine for Scientific Diagrams

A machine-vision algorithm has learned to analyze and categorize scientific figures.
In 1973, the statistician Francis Anscombe devised a fascinating demonstration showing why data should always be plotted before it is analyzed. The demonstration consisted of four data sets that had almost identical statistical properties. By this measure they are essentially the same.

But when plotted, the data sets look entirely different. Anscombe’s quartet, as it has become known, shows how good graphics allow people to analyze data in a different way, to think and talk about it on another level.  

Most scientists recognize the importance of good graphics for communicating complex ideas. It’s hard to describe the structure of DNA, for example, without a diagram.

And yet, there is little if any evidence showing that good graphics are an important part of the scientific endeavor. The significance of good graphics may seem self-evident, but without evidence, it is merely a hypothesis.

Today, that changes thanks to the work of Po-shen Lee and pals at the University of Washington in Seattle, who have used a machine-vision algorithm to search for graphics in scientific papers and then analyze and classify them. This work reveals for the first time that graphics play an important role in the scientific process. “We find a significant correlation between scientific impact and the use of visual information, where higher impact papers tend to include more diagrams, and to a lesser extent more plots and photographs,” they say.
Their site is here:

About Viziometrics
In science, we formally communicate through published papers, books and conference proceedings. These published works summarize results in many forms: text, citations, figures, schematics, visualizations, equations, etc. Most of the research in scientometrics/bibliometrics -- the quantitative study of the academic literature -- has been done on the text and citations. In this project, we focus our efforts on the figures and images. These information-dense objects are largely ignored in scientometrics, yet they are often the cornerstone of a paper. If you have ever picked up a cell biology paper, you will often see the entire paper summarized in one schematic. We want to include them both in our analysis of scholarly communication and in improving information retrieval in science. These objects have been ignored primarily because they are big, unwieldy and hard to analyze. New, scalable approaches to machine learning provide a new toolset for doing this kind of research.
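The core statistical move in viziometrics - correlating figure use with impact - can be illustrated with a toy sketch. The numbers below are fabricated for demonstration and are not the team’s data.

```python
# Toy version of the viziometrics analysis: correlate figure density with
# a citation-based impact proxy. All data here is invented for illustration.
from statistics import mean

figures   = [1, 2, 3, 4, 5, 6, 8, 10]     # diagrams per paper
citations = [3, 5, 4, 9, 12, 10, 20, 25]  # impact proxy per paper

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(figures, citations)
print(f"correlation between figure count and citations: r = {r:.2f}")
```

A strong positive r on a corpus of millions of real papers, not eight fake ones, is the kind of evidence the Washington team reports.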


We’re seeing AI and machine learning applied to an increasingly large range of human knowledge domains - here’s another domain that many think of as the core of human ability - creative excellence. However, the reality of the situation is that the future is not a race against the machines - it’s a race with them - this may be better understood as another means of enhancing creative capacity together.
We don’t know what artists and musicians will do with these new tools, but we’re excited to find out. Look at the history of creative tools. Daguerre and later Eastman didn’t imagine what Annie Leibovitz or Richard Avedon would accomplish in photography. Surely Rickenbacker and Gibson didn’t have Jimi Hendrix or St. Vincent in mind. We believe that the models that have worked so well in speech recognition, translation and image annotation will seed an exciting new crop of tools for art and music creation.

Welcome to Magenta!

We’re happy to announce Magenta, a project from the Google Brain team that asks: Can we use machine learning to create compelling art and music? If so, how? If not, why not? We’ll use TensorFlow, and we’ll release our models and tools in open source on our GitHub. We’ll also post demos, tutorial blog postings and technical papers. Soon we’ll begin accepting code contributions from the community at large. If you’d like to keep up on Magenta as it grows, you can follow us on our GitHub and join our discussion group.

What is Magenta?
Magenta has two goals. First, it’s a research project to advance the state of the art in machine intelligence for music and art generation. Machine learning has already been used extensively to understand content, as in speech recognition or translation. With Magenta, we want to explore the other side—developing algorithms that can learn how to generate art and music, potentially creating compelling and artistic content on their own.

Second, Magenta is an attempt to build a community of artists, coders and machine learning researchers. The core Magenta team will build open-source infrastructure around TensorFlow for making art and music. We’ll start with audio and video support, tools for working with formats like MIDI, and platforms that help artists connect to machine learning models. For example, we want to make it super simple to play music along with a Magenta performance model.


Another signal in the Moore’s Law is Dead - Long Live Moore’s Law file.

Google team predicts quantum computing supremacy over classical computing around 2018 with a 40 qubit universal quantum computer

Google is trying to combine the adiabatic quantum computing (AQC) method with the digital approach’s error-correction capabilities.

The Google team uses a row of nine solid-state qubits, fashioned from cross-shaped films of aluminium about 400 micrometres from tip to tip. These are deposited onto a sapphire surface. The researchers cool the aluminium to 0.02 kelvin, turning the metal into a superconductor with no electrical resistance. Information can then be encoded into the qubits in their superconducting state.

The interactions between neighboring qubits are controlled by ‘logic gates’ that steer the qubits digitally into a state that encodes the solution to a problem. As a demonstration, the researchers instructed their array to simulate a row of magnetic atoms with coupled spin states — a problem thoroughly explored in condensed-matter physics. They could then look at the qubits to determine the lowest-energy collective state of the spins that the atoms represented.
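As a rough classical analogue of that demonstration, one can brute-force the lowest-energy collective state of a small chain of coupled spins. The couplings below are invented for illustration; the point of a quantum device is that it finds the minimum without enumerating every configuration.

```python
from itertools import product

# Brute-force search for the lowest-energy collective state of a small
# chain of coupled spins (a classical Ising chain with invented couplings).
# With n spins there are 2^n candidate states; we simply try them all.

J = [1.0, -0.5, 1.0, 1.0, -0.5, 1.0, 1.0, 0.5]  # couplings between 9 spins

def energy(spins, couplings):
    """E = -sum_i J_i * s_i * s_{i+1}, with each s_i in {-1, +1}."""
    return -sum(j * a * b for j, a, b in zip(couplings, spins, spins[1:]))

n = len(J) + 1
best = min(product((-1, 1), repeat=n), key=lambda s: energy(s, J))
print("ground state:", best, "energy:", energy(best, J))
```

On a chain every coupling can be satisfied, so the minimum energy is just the negated sum of the coupling magnitudes; for systems with loops or many interacting electrons, this exhaustive search blows up exponentially, which is where quantum hardware is expected to help.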

This is a fairly simple problem for a classical computer to solve. But the new Google device can also handle so-called ‘non-stoquastic’ problems, which classical computers cannot. These include simulations of the interactions between many electrons, which are needed for accurate computer simulations in chemistry. The ability to simulate molecules and materials at the quantum level could be one of the most valuable applications of quantum computing.

This new approach should enable a computer with quantum error correction, says Lidar.


This is fascinating - neural dust for bugs now - but in a decade or two - new forms of cognition - maybe more. A significant video is also included.

Neural Dust - ultra small brain interfaces - is being used to make cyborg insects

As the computation and communication circuits we build radically miniaturize (i.e. become so low power that 1 picojoule is sufficient to bang out a bit of information over a wireless transceiver; become so small that 500 square microns of thinned CMOS can hold a reasonable sensor front-end and digital engine), the barrier to introducing these types of interfaces into organisms will get pretty low. Put another way, the rapid pace of computation and communication miniaturization is swiftly blurring the line between the technological base that created us and the technological base we’ve created. Michel Maharbiz, University of California, Berkeley, is giving an overview (June 16, 2016) of recent work in his lab that touches on this concern. Most of the talk will cover their ongoing exploration of the remote control of insects in free flight via implantable radio-equipped miniature neural stimulating systems; recent results with neural interfaces and extreme miniaturization directions will be discussed. If time permits, he will show recent results building extremely small neural interfaces they call “neural dust,” work done in collaboration with the Carmena, Alon and Rabaey labs.

Radical miniaturization has created the ability to introduce a synthetic neural interface into a complex, multicellular organism, as exemplified by the creation of a “cyborg insect.”

“The rapid pace of computation and communication miniaturization is swiftly blurring the line between technological base we’ve created and the technological base that created us,” explained Dr. Maharbiz. “These combined trends of extreme miniaturization and advanced neural interfaces have enabled us to explore the remote control of insects in free flight via implantable radio-equipped miniature neural stimulating systems.”


If neural dust seems too much like science fiction - here’s another approach.
For a little background, science fiction author Iain M. Banks first coined the term “neural lace” in The Culture series. In these novels, people living on another planet installed genetically engineered glands in their brains capable of secreting stimulants, psychedelics and sedatives whenever they wanted them.

Creating the World’s First Neural Lace Network

Last year, researchers from Harvard and the National Center for Nanoscience and Technology in Beijing managed to create a working neural lace prototype. They figured out a way to inject a tiny electronic mesh sensor into the brain of a mouse that fully integrates with cerebral matter, enabling computers to monitor brain activity.

Using a syringe, the mesh was injected into the mouse brain where the material expanded to 30 times its original size. Once inside, the mouse brain cells grew around the mesh, forming connections with the wires in the flexible mesh circuit. Unlike most implants, the mouse brain completely accepted the mechanical component and assimilated with it without any damage being caused to the mouse.

To see how this type of technology could be applied to humans, consider that electric shock treatment is currently used for patients suffering from severe muscle spasms. While this approach is only used in worst-case scenarios, it relies on long wires inserted deep into the brain, risking long-term brain injury with every insertion.

If a neural lace is able to completely integrate with the human brain, this would enable doctors to treat all sorts of neurodegenerative diseases that are currently difficult to cure. But that is only a small piece of a much bigger opportunity here.


Anyone who knows me may be familiar with the title of this article.
One of the interesting things about living long lives is that you have more stages, and more stages mean more transitions. When lockstep goes, you’re making the transition often on your own. But transitions are also to do with networks. In fact, it turns out that networks are really important for long lives. All sorts of different types of networks. But the network that helps you transition is what sociologists would call “weak ties”: people who are different from you, whom you don’t know very well.

Are you ready to live to 100?

London Business School professor Lynda Gratton believes living longer requires individuals and corporations to change their approach to careers, life transitions, and retirement.
Imagine being 15 years from the traditional retirement age and deciding to start your own company. Or deciding to enter a completely different industry—at age 60. Or spending your twenties deferring further study to find out what really drives you. All of these are options in this age of longevity, according to London Business School professor of management practice Lynda Gratton, who argues that the trajectory of our lives—professionally and personally—remains trapped in a mind-set that applied when life spans were much shorter. In this interview with McKinsey’s Rik Kirkland, Gratton draws on her new book, The 100-Year Life: Living and Working in an Age of Longevity, to explain why lives are moving from two stages to three and what that means not only for individuals but for corporations and government as well. An edited transcript of their conversation follows.


Transforming sunshine into energy isn’t new and has heralded the world into the endgame of fossil fuels - helping us to address climate change - but here’s another trajectory involving the domestication of DNA that also enables carbon capture and transforms manufacturing. Although this is not ready for primetime - there’s lots of potential to scale this up.
And there is no reason to think that the R. eutropha could not be made to generate other products—perhaps complex hydrocarbon molecules like those found in fossil fuels or even the whole range of chemicals currently synthesized from polluting resources, such as fertilizers. "You have bugs that eat hydrogen as their only food source, and the hydrogen came from solar energy water splitting. So you have renewable bugs and the synthetic biology to make them do anything," Nocera says. "You can start thinking about a renewable chemicals industry." The hybrid team reports in the Science paper that they have already induced R. eutropha to make a molecule that can ultimately be transformed into plastics.
"This science you can do in your backyard. You don't need a multi-billion dollar massive infrastructure," Nocera says.
"By integrating the technology of biology and organic chemistry there is a very powerful path forward where you take the best of both worlds," he adds. "I took air plus sunlight plus water and I made stuff out of it, and I did it 10 times better than nature. That makes me feel good."

Bionic Leaf Makes Fuel from Sunlight, Water and Air

A new device that combines chemistry and synthetic biology could prove key to renewable fuels and even chemicals—and combating climate change
The device uses solar electricity from a photovoltaic panel to power the chemistry that splits water into oxygen and hydrogen, then adds pre-starved microbes to feed on the hydrogen and convert CO2 in the air into alcohol fuels.

A tree's leaf, a blade of grass, a single algal cell: all make fuel from the simple combination of water, sunlight and carbon dioxide through the miracle of photosynthesis. Now scientists say they have replicated—and improved—that trick by combining chemistry and biology in a "bionic" leaf.

Chemist Daniel Nocera of Harvard University and his team joined forces with synthetic biologist Pamela Silver of Harvard Medical School and her team to craft a kind of living battery, which they call a bionic leaf for its melding of biology and technology. The device uses solar electricity from a photovoltaic panel to power the chemistry that splits water into oxygen and hydrogen, then adds pre-starved microbes to feed on the hydrogen and convert CO2 in the air into alcohol fuels. The team’s first artificial photosynthesis device appeared in 2015—pumping out 216 milligrams of alcohol fuel per liter of water—but the nickel-molybdenum-zinc catalyst that made its water-splitting chemistry possible had the unfortunate side effect of poisoning the microbes.

So the team set out in search of a better catalyst, one that would play well with living organisms while effectively splitting water. As the team reports in Science on June 2, they found it in an alloy of cobalt and phosphorus, an amalgam already in use as an anticorrosion coating for plastic and metal parts found in everything from faucets to circuit boards. With a little charge, this new catalyst can assemble itself out of a solution of regular water, cobalt and phosphate—and phosphate in water actually is good for living things like the Ralstonia eutropha bacteria that make up the back half of the bionic leaf. Run an electric current from a photovoltaic device through this solution at a high enough voltage and it splits water. That voltage is also higher than what is needed to induce the cobalt to precipitate out of the solution and form the cobalt phosphide catalyst, which means when the bionic leaf is running there are always enough electrons around to induce the catalyst's formation—and therefore no excess metal left to poison the microbes or bring the bionic leaf's water-splitting to a halt. "The catalyst can never die as it's functioning," Nocera says, noting that the new artificial leaf has been able to run for up to 16 days at a stretch.


Here’s something in the works and ready to start production this year.
“Our aeroponic system is a closed loop system, using 95% less water than field farming, 40% less than hydroponics, and zero pesticides.”

World’s Biggest Indoor Vertical Farm Near NYC to Use 95% Less Water

AeroFarms is on track to produce 2 million pounds of food per year in its 70,000-square-foot facility in Newark, under construction less than an hour outside of Manhattan. Their efficient operation, based on previous experience at similar but smaller facilities, can accomplish this astonishing output “while using 95% less water than field farmed-food and with yields 75 times higher per square foot annually.”

This new facility is comparable in efficiency to what is currently the world’s largest vertical farm in Japan, but nearly three times the size. Staggering its crops is part of the success behind AeroFarms’ strategy at their new and existing locations – at a given facility they are able to switch among 22 crops per year. Their all-season growth works with specialized LED lights and climate controls, all without the need for sunlight or soil.
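A quick back-of-envelope check of those figures, taking the quoted 2 million pounds, 70,000 square feet, and “75 times” at face value:

```python
# Back-of-envelope check of the AeroFarms numbers quoted above.
pounds_per_year = 2_000_000
facility_sqft = 70_000

vertical_yield = pounds_per_year / facility_sqft   # lb per sq ft per year
implied_field_yield = vertical_yield / 75          # if the "75x" claim holds

print(f"vertical farm: {vertical_yield:.1f} lb/sqft/yr")
print(f"implied field baseline: {implied_field_yield:.2f} lb/sqft/yr")
```

That works out to roughly 28.6 pounds per square foot per year, implying a field baseline under half a pound per square foot - plausible for leafy greens, which is what these facilities grow.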


The emergence of new agricultural capacity is hard to imagine if we consider only today's conditions for cultivating food. It is the power of this participatory engagement with AI that multiplies the opportunities within the human condition.
The paragraph quoted below also suggests why market systems may become obsolete. What makes a market system effective is its price mechanism, which has been the most efficient way to allocate scarce resources to where they produce the most value. If machine learning can perform that allocation more efficiently, the market's central advantage disappears.
So a startup called Harvesting is analyzing satellite data on a vast scale with machine learning, with the idea to help institutions distribute money more efficiently. “Our hope is that in using this technology we would be able to segregate such farmers and villages and have banks or governments move dollars to the right set of people,” says Harvesting CEO Ruchit Garg. While a human analyst can handle 10, maybe 15 variables at a time, Garg says, machine learning algorithms can handle 2,000 or more. That’s some serious context.
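Harvesting's actual models and data are not public, so as a purely hypothetical illustration of why an algorithm can weigh thousands of variables where a human analyst handles ten or fifteen, here is a ridge regression fit on 2,000 synthetic "satellite" features using plain NumPy (all names, sizes, and data are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 villages, 2,000 satellite-derived features each
# (vegetation indices, rainfall, soil moisture, etc. -- all synthetic here).
n_villages, n_features = 500, 2000
X = rng.normal(size=(n_villages, n_features))

# Pretend only a handful of features actually drive a "creditworthiness" score.
true_weights = np.zeros(n_features)
true_weights[:10] = rng.normal(size=10)
y = X @ true_weights + 0.1 * rng.normal(size=n_villages)

# Ridge regression, closed form: (X'X + lambda*I) w = X'y.
# The L2 penalty keeps the solution stable even with far more
# features than samples -- exactly the many-variable regime described above.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Rank villages by predicted score -- the "right set of people" to fund.
scores = X @ w
top_villages = np.argsort(scores)[::-1][:20]
print("top-scoring villages:", top_villages[:5])
```

The point is not this particular model but the regime: no human analyst can hold a 2,000-dimensional weighting in their head, while for the algorithm it is a single linear solve.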

The Future of Humanity’s Food Supply Is in the Hands of AI

HUMANITY’S GOT ITSELF a problem. As Homo sapiens balloons as a species—to perhaps nearly 10 billion by 2050—the planet stubbornly stays the same size, meaning the same amount of land must support way, way more people. Add the volatility of global warming and consequent water shortages, and the human race is going to have some serious trouble feeding itself.

Perhaps it’s serendipitous, then, that the machines have finally arrived. Truly smart, truly impressive robots and machine learning algorithms that may help usher in a new Green Revolution to keep humans fed on an increasingly mercurial planet. Think satellites that automatically detect drought patterns, tractors that eyeball plants and kill the sick ones, and an AI-powered smartphone app that can tell a farmer what disease has crippled their crop.

Forget scarecrows. The future of agriculture is in the hands of the machines.


The phase transition into a new energy geopolitics may be accelerating; maybe it's time to re-imagine 2020? A 3-minute video accompanies the article.

Renewable energy now supplies almost a quarter of the world's power needs

Last year was an absolutely huge 12 months for renewable energy, with a new global status report on clean energy highlighting how 2015 was a record year for the industry – including the revelation that renewable energy can now satisfy nearly a quarter of the world's power demands.

According to energy policy network REN21, record clean energy investments in 2015 drove the largest annual increase ever in renewable power generating capacity, with an estimated 147 gigawatts (GW) added to the global grid – suggesting that by the end of 2015, renewable capacity could shoulder 23.7 percent of global electricity requirements.

"What is truly remarkable about these results is that they were achieved at a time when fossil fuel prices were at historic lows, and renewables remained at a significant disadvantage in terms of government subsidies," said REN21 executive secretary Christine Lins. "For every dollar spent boosting renewables, nearly 4 dollars were spent to maintain our dependence on fossil fuels."


Every year Mary Meeker produces an Internet Trends Report and presentation. It is usually aimed at commercial and marketing clients, but it's a MUST VIEW for anyone interested in the current state of the Internet. This is a 25-minute video.

Internet trends report | Mary Meeker, KPCB | Code Conference 2016

Published on 1 Jun 2016
Kleiner Perkins Caufield & Byers partner Mary Meeker delivers her annual internet trends report. She says "easy growth is behind us" as the newest internet users are coming from less developed and less affluent countries. Meeker also delves into artificial intelligence, Snapchat brand integrations, changes to live sports viewing habits, car industry innovation and the rise of millennial consumers, among many other topics. Visit Recode.net to see all 213 of Meeker's slides and follow along.
The full presentation is here as a pdf:


For Fun
This has been getting quite a bit of viral coverage, but it's fascinating and important. I like it because I love Blade Runner and the videos included are worth watching (remember, these are early days). It also matters because it raises issues of intellectual property and learning: no artist or creator can create without exposure to the cultural-scientific commons, and learning is inseparable from doing. Whether the learning is done by computers or by humans, whatever creations emerge will always owe a debt to previous learning.

A guy trained a machine to "watch" Blade Runner. Then things got seriously sci-fi.

Last week, Warner Bros. issued a DMCA takedown notice to the video streaming website Vimeo. The notice concerned a pretty standard list of illegally uploaded files from media properties Warner owns the copyright to — including episodes of Friends and Pretty Little Liars, as well as two uploads featuring footage from the Ridley Scott movie Blade Runner.

Just a routine example of copyright infringement, right? Not exactly. Warner Bros. had just made a fascinating mistake. Some of the Blade Runner footage — which Warner has since reinstated — wasn't actually Blade Runner footage. Or, rather, it was, but not in any form the world had ever seen.

Instead, it was part of a unique machine-learned encoding project, one that had attempted to reconstruct the classic Philip K. Dick android fable from a pile of disassembled data.
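The project compressed each frame of the film into a small learned code and then played back the reconstructions; the actual work used a neural (variational) autoencoder trained on video frames. As a minimal stand-in for that idea, here is the optimal *linear* autoencoder, computed in closed form via SVD (equivalent to PCA), on tiny synthetic "frames" — all sizes and data here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "frames": 200 tiny 8x8 grayscale images flattened to 64-d vectors,
# generated from a 4-d latent so that real compression is possible.
latent = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 64))
frames = latent @ mixing + 0.05 * rng.normal(size=(200, 64))

# Optimal linear autoencoder via SVD: the top-k right singular vectors
# give the best rank-k encoder/decoder pair for squared error.
k = 8
_, _, Vt = np.linalg.svd(frames, full_matrices=False)
encoder = Vt[:k].T          # 64 pixels -> 8-number code
decoder = Vt[:k]            # 8-number code -> 64 pixels

codes = frames @ encoder            # compressed representation of each frame
reconstructed = codes @ decoder     # the "hallucinated" playback

mse = np.mean((reconstructed - frames) ** 2)
print(f"compressed 64 pixels to {k} numbers; reconstruction MSE = {mse:.4f}")
```

A neural autoencoder does the same encode/decode dance with nonlinear layers, which is what gives the reconstructed Blade Runner footage its dreamlike, smeared quality: the film as remembered through a narrow bottleneck.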


This is a two year old article - but worth the read & thought.

The Dominant Life Form in the Cosmos Is Probably Superintelligent Robots

If and when we finally encounter aliens, they probably won’t look like little green men, or spiny insectoids. It’s likely they won’t be biological creatures at all, but rather, advanced robots that outstrip our intelligence in every conceivable way. While scores of philosophers, scientists and futurists have prophesied the rise of artificial intelligence and the impending singularity, most have restricted their predictions to Earth. Fewer thinkers—outside the realm of science fiction, that is—have considered the notion that artificial intelligence is already out there, and has been for eons.

Susan Schneider, a professor of philosophy at the University of Connecticut, is one who has. She joins a handful of astronomers, including Seth Shostak, director of the Center for SETI Research at the SETI Institute, astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick in espousing the view that the dominant intelligence in the cosmos is probably artificial. In her paper “Alien Minds,” written for a forthcoming NASA publication, Schneider describes why alien life forms are likely to be synthetic, and how such creatures might think.

The reason for all this has to do, primarily, with timescales. For starters, when it comes to alien intelligence, there’s what Schneider calls the “short window observation”—the notion that, by the time any society learns to transmit radio signals, they’re probably a hop-skip away from upgrading their own biology. It’s a twist on the belief popularized by Ray Kurzweil that humanity’s own post-biological future is near at hand.

“As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI,” Shostak said. “At that point, soft, squishy brains become an outdated model.”
