Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.
Many thanks to those who enjoy this. ☺
In the 21st Century curiosity will SKILL the cat.
“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
ArticlesHow Female ENIAC Programmers Pioneered the Software Industry
“Horizon scanning for emerging technologies is crucial to staying abreast of developments that can radically transform our world, enabling timely expert analysis in preparation for these disruptors. The global community needs to come together and agree on common principles if our society is to reap the benefits and hedge the risks of these technologies,” said Dr Bernard Meyerson, Chief Innovation Officer of IBM and Chair of the Meta-Council on Emerging Technologies.
As Pedro Domingos, author of the popular ML manifesto The Master Algorithm, writes, “Machine learning is something new under the sun: a technology that builds itself.” Writing such systems involves identifying the right data, choosing the right algorithmic approach, and making sure you build the right conditions for success. And then (this is hard for coders) trusting the systems to do the work.
“The more people who think about solving problems in this way, the better we’ll be,” says a leader in the firm’s ML effort, Jeff Dean, who is to software at Google as Tom Brady is to quarterbacking in the NFL. Today, he estimates that of Google’s 25,000 engineers, only a “few thousand” are proficient in machine learning. Maybe ten percent. He’d like that to be closer to a hundred percent. “It would be great to have every engineer have at least some amount of knowledge of machine learning,” he says.
... improved neural-net algorithms along with more powerful computation from the Moore’s Law effect and an exponential increase in data drawn from the behavior of huge numbers of users at companies like Google and Facebook, began a new era of ascendant machine learning.
“The machine learning model is not a static piece of code — you're constantly feeding it data,” says Robson. “We are constantly updating the models and learning, adding more data and tweaking how we're going to make predictions. It feels like a living, breathing thing. It’s a different kind of engineering.”
“It’s a discipline really of doing experimentation with the different algorithms, or about which sets of training data work really well for your use case,” says Giannandrea, who despite his new role as search czar still considers evangelizing machine learning internally as part of his job. “The computer science part doesn’t go away. But there is more of a focus on mathematics and statistics and less of a focus on writing half a million lines of code.”
Complexity studies systems whose elements react to the patterns they create - i.e. to the context they create.
Mr. Toffler was a self-trained social science scholar and successful freelance magazine writer in the mid-1960s when he decided to spend five years studying the underlying causes of a cultural upheaval that he saw overtaking the United States and other developed countries.
The fruit of his research, “Future Shock” (1970), sold millions of copies and was translated into dozens of languages, catapulting Mr. Toffler to international fame. It is still in print.
In the book, in which he synthesized disparate facts from every corner of the globe, he concluded that the convergence of science, capital and communications was producing such swift change that it was creating an entirely new kind of society.
His predictions about the consequences to culture, the family, government and the economy were remarkably accurate. He foresaw the development of cloning, the popularity and influence of personal computers and the invention of the internet, cable television and telecommuting.
“No serious futurist deals in ‘predictions,’” he ... advised readers to “concern themselves more and more with general theme, rather than detail.” That theme, he emphasized, was that “the rate of change has implications quite apart from, and sometimes more important than, the directions of change.”
“We who explore the future are like those ancient mapmakers, and it is in this spirit that the concept of future shock and the theory of the adaptive range are presented here — not as final word, but as a first approximation of the new realities, filled with danger and promise, created by the accelerative thrust.”
It seems discussion about the characteristics of different generations is ubiquitous - Boomers, Gen-X, Millennials - aka generation ‘Y’. But it could be that as change accelerates - trying to rely on broad generational cohort characteristics based on ‘defining events’ and/or technology may not be sound. Here’s an interesting article from Knowledge at Wharton’s
The word millennial now means tech-savvy, and it means entitled, and it means innovative, and it means all sorts of things that aren’t necessarily true.
Oftentimes, what is interesting with the advice that is given about how to deal with millennials is that the advice is pretty relevant for every generation. It’s usually just best practices, good advice; it would be just as valuable for a baby boomer.
Generation Y, aka the millennials, now make up the largest cohort in the workforce, and the people hiring them — and marketing to them — have plenty of preconceived notions about them. But no generation is a monolithic block, and trying to fit all of them into the same pigeonhole does everyone an expensive and often demoralizing disservice, whether it is “cynical” Gen Xers or “tech-averse” members of the Silent Generation.
This is a list of the top 10 emerging technologies from the World Economic Forum - a nice summary.
One of the criteria used by council members during their deliberations was the likelihood that 2016 represents a tipping point in the deployment of each technology. Thus, the list includes some technologies that have been known for a number of years, but are only now reaching a level of maturity where their impact can be meaningfully felt.
A diverse range of breakthrough technologies, including batteries capable of providing power to whole villages, “socially aware” artificial intelligence and new generation solar panels, could soon be playing a role in tackling the world’s most pressing challenges, according to a list published today by the World Economic Forum.
“Technology has a critical role to play in addressing each of the major challenges the world faces, yet it also poses significant economic and social risks. As we enter the Fourth Industrial Revolution, it is vital that we develop shared norms and protocols to ensure that technology serves humanity and contributes to a prosperous and sustainable future,” said Jeremy Jurgens, Chief Information and Interaction Officer, Member of the Executive Committee, World Economic Forum.
The Top 10 Emerging Technologies 2016 list, compiled by the Forum’s Meta-Council on Emerging Technologies and published in collaboration with Scientific American, highlights technological advances its members believe have the power to improve lives, transform industries and safeguard the planet. It also provides an opportunity to debate any human, societal, economic or environmental risks and concerns that the technologies may pose prior to widespread adoption.
Nanosensors and the Internet of Nanothings
Next Generation Batteries
Perovskite Solar Cells
Open AI Ecosystem
Systems Metabolic Engineering
The full paper is here
Linux is far from an ‘emerging’ technology - but this article outlines what every government and the myriad organization within them should begin to implement - an open-source operating systems that transforms IT staff functioning as minions of vendors into a development capacity.
IT is moving to the cloud. And, what powers the cloud? Linux. When even Microsoft's Azure has embraced Linux, you know things have changed.
Like it or lump it, the cloud is taking over IT. We've seen the rise of the cloud over in-house IT for years now. And, what powers the cloud? Linux.
A recent survey by the Uptime Institute of 1,000 IT executives found that 50 percent of senior enterprise IT executives expect the majority of IT workloads to reside off-premise in cloud or colocation sites in the future. Of those surveyed, 23 percent expect the shift to happen next year, and 70 percent expect that shift to occur within the next four years.
This comes as no surprise. Much as many of us still love our physical servers and racks, it often doesn't make financial sense to run your own data center.
It's really very simple. Just compare your capital expense (CAPEX) of running your own hardware versus the operational expenses (OPEX) of using a cloud. Now, that's not to say you want to outsource everything and the kitchen sink, but most of the time and for many of your jobs you'll want to move to the cloud.
In turn, if you're going to make the best use of the cloud, you need to know Linux.
Even Microsoft understands this.
There is lots of discussions these days with the looming disruption of automation, robotics and AI about what will people do? The key suggestion is some sort of guaranteed minimum income for all. Although these suggestions tend to focus on government approaches - this is a weak signal that this is important to the private sector and indicator of the need to support the inherent risks involved in innovation - perhaps a beginning of an inevitable shift in our economic paradigm.
One problem with the potential ‘success metrics’ is whether participants ‘lift themselves out of poverty’. This may not be the best metric for anticipating the fundamental economics shift - a better metric may be a focus on whether participants are enabled to ‘create more value’ - either through better social participation, deeper contributions to the social fabric (e.g. volunteering), contributions to knowledge e.g. enabled dedication to platform like Wikipedia, FoldIt, GalaxyZoo, etc. Our social economies requires many forms of value creation many of which are subject to direct remuneration in money.
A few dozen Oakland residents to get $2,000 per month, no strings, for a year.
Earlier this month, Y Combinator, the famed Silicon Valley incubator dropped a bombshell: it had selected this city to be the home of its new "Basic Income" pilot project, to start later this year.
The idea is pretty simple. Give some people a small amount of money per month, no strings attached, for a year, and see what happens. With any luck, people will use it to lift themselves out of poverty.
In this case, as Matt Krisiloff of Y Combinator Research (YCR) told Ars, that means spending about $1.5 million over the course of a year to study the distribution of "$1,500 or $2,000" per month to "30 to 50" people. There will also be a similar-sized control group that gets nothing. The project is set to start before the end of 2016.
The notion of guaranteed minimum income has been kicking around globally for centuries, especially among 20th century thinkers (Martin Luther King, Jr. famously advocated for it). But it’s only recently that extensive trials have begun in various places, including Canada, the Netherlands, Finland, and now in Oakland. (Another organization, called Give Directly, operates a similar program in Kenya.)
Tapped to run the project is Elizabeth Rhodes, an academic who recently arrived in Oakland. She says the project’s goal is "to empower people and give people the freedom to be able to meet their basic needs."
This is a short blog post by Google - with a link to a more substantive paper outlining five key challenges to ensure safe AI.
We’ve outlined five problems we think will be very important as we apply AI in more general circumstances. These are all forward thinking, long-term research questions -- minor issues today, but important to address for future systems:
Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.
Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
Robustness to Distributional Shift: How do we ensure that an AI system recognizes, and behaves robustly, when it’s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office.
The full paper is here
This is an very interesting article - in some ways making the case for why we need an intensive, diverse ecology of specialist AI and an AI-ssistant to help us manage our sensors, their data and potential analytics. For social scientists - this is another weak signal that the survey is dead and real-time behavioral data-analysis is the future.
It started when NASA answered a call for a tool to detect dangerous gases and chemicals with a smartphone. The result became a smartphone-linked device that can do, well, just about anything someone can build a sensor for.
When the Department of Homeland Security (DHS) put out its request in 2007, NASA Ames Research Center scientist Jing Li already had a sensor that reacted to various gases and compounds — she’d been working on it for space applications, like evaluating atmospheres on other planets.
But to answer the DHS specs, she needed a way for the device to “sniff” the air for samples and a system that would allow it to interface with a smartphone. Li’s team settled on a small fan to gather the air samples, and approached George Yu of Genel Systems Inc., who was able to deliver the cell phone interface system.
Meanwhile, Li convinced the program manager at DHS that the sensor should be attached to the outside of the phone, instead of being built in. “This is a very new technology, and there will be a lot of iterations. Making it interchangeable will make it easier to update,” she explained.
That decision turned out to be game-changing.
This is a very interesting 15 min video of Valve (the game company) engineers showing the development of prototypes in their VR lab. This is interesting as a view into the process of tinkering and inventing - that happens way before any finished project hits the market - they use 3D printing and ‘junk’ and ‘bowge wires’ that anyone can use in their own garage. This is an inspiration to all ‘Makers’.
This is a fascinating startup that is integrating robots to innovate the pizza and its delivery - a model that is ripe for implementation to all sorts of food preparation and home delivery. Robot cooks and mobile cooking.
Zume, a new startup in Mountain View, is trying to make a more profitable pizza through robotics
In the back kitchen of Mountain View's newest pizzeria, Marta works tirelessly, spreading marinara sauce on uncooked pies. She doesn’t complain, takes no breaks, and has never needed a sick day. She works for free.
Marta is one of two robots working at Zume Pizza, a secretive food delivery startup trying to make a more profitable pizza through machines. It's also created special delivery trucks that will finish cooking pizzas during the journey to hungry customers if approved by the Santa Clara County Department of Environmental Health. Right now Zume is only feeding people in Mountain View, California, but it has ambitions to dominate the $9.7 billion pizza delivery industry.
"We are going to be the Amazon of food," said Zume's co-founder and executive chairman, Alex Garden. Garden, 41, is the former president of Zynga Studios. Before that, he was a general manager of Microsoft's Xbox Live. Garden launched Zume in stealth mode last June, when he began quietly recruiting engineers under a pseudonym and building his patented trucks in an unmarked Mountain View garage. In September, he brought on Julia Collins, a 37-year-old restaurant veteran. She became chief executive officer and a co-founder. Collins was previously the vice president and CEO of Harlem Jazz Enterprises, the holding company for Minton's, a historic Harlem eatery.
This shouldn’t really be a surprise - but maybe in the next decade we’ll see this in a military near by.
In fact, it was only after early iterations of ALPHA bested other computer program opponents that Lee then took to manual controls against a more mature version of ALPHA last October. Not only was Lee not able to score a kill against ALPHA after repeated attempts, he was shot out of the air every time during protracted engagements in the simulator.
Since that first human vs. ALPHA encounter in the simulator, this AI has repeatedly bested other experts as well, and is even able to win out against these human experts when its (the ALPHA-controlled) aircraft are deliberately handicapped in terms of speed, turning, missile capability and sensors.
the AI is so fast that it could consider and coordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than ALPHA's human opponents could blink.
Artificial intelligence (AI) developed by a University of Cincinnati doctoral graduate was recently assessed by subject-matter expert and retired United States Air Force Colonel Gene Lee - who holds extensive aerial combat experience as an instructor and Air Battle Manager with considerable fighter aircraft expertise - in a high-fidelity air combat simulator.
The artificial intelligence, dubbed ALPHA, was the victor in that simulated scenario, and according to Lee, is "the most aggressive, responsive, dynamic and credible AI I've seen to date."
Details on ALPHA - a significant breakthrough in the application of what's called genetic-fuzzy systems are published in the most-recent issue of the Journal of Defense Management, as this application is specifically designed for use with Unmanned Combat Aerial Vehicles (UCAVs) in simulated air-combat missions for research purposes.
...with most AIs, "an experienced pilot can beat up on it (the AI) if you know what you're doing. Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios."
But, now, it's been Lee, who has trained with thousands of U.S. Air Force pilots, flown in several fighter aircraft and graduated from the U.S. Fighter Weapons School (the equivalent of earning an advanced degree in air combat tactics and strategy), as well as other pilots who have been feeling pressured by ALPHA.
We know how AI is actually writing articles now - but even better for us - if we think of the concept of a personal AI-ssistant and researcher.
Soon you could be chatting with your computer about the morning news. An AI has learned to read and answer questions about a news article with unprecedented accuracy.
Creating AI systems that can learn in the background from humanity’s existing stores of information is one of the big goals of computer science. “Computers don’t have the kind of general knowledge and common sense of how the world works [from reading] about things in novels or watch[ing] sitcoms,” says Chris Manning at Stanford University.
To get a step closer to this, last year, Google’s DeepMind team used articles from the Daily Mail website and CNN to help train an algorithm to read and understand a short story. The team used the bulleted summaries at the top of these articles to create simple interpretive questions that trained the algorithm to search for key points.
Now a group led by Manning has designed an algorithm that beat DeepMind’s results by an impressive 10 per cent on the CNN articles and 8 per cent for Daily Mail stories. It scored 70 per cent overall.
The improvement came through streamlining the DeepMind model. “Some of the stuff they had just causes needless complications,” says Manning. “You get rid of that and the numbers go up.”
Not only is AI writing and summarizing news and science articles - but now we must become aware of their involvement in our political processes - in social media and even via establishment media. This is an academic research paper exploring the recent Brexit vote and the use of bots.
Bots are social media accounts that automate interaction with other users, and they are active on the StrongerIn-Brexit conversation happening over Twitter. These automated scripts generate content through these platforms and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study on the use of political bots during the UK referendum on EU membership, we analyze the tweeting patterns for both human users and bots. We find that political bots have a small but strategic role in the referendum conversations: (1) the family of hashtags associated with the argument for leaving the EU dominates, (2) different perspectives on the issue utilize different levels of automation, and (3) less than 1 percent of sampled accounts generate almost a third of all the messages.
For most people the discussion about automation has tended to focus on the manual work people do - and the potential for displacing human employment. For example the automation of truck driving the most common job in the US (autonomous transportation). But professions have to be ready to transform themselves by augmenting their capabilities. This article discusses the looming change in how we do science. Professions will have to become comfortable with an ecology of AI-ssistants.
DARPA has a new program called Data-Driven Discovery of Models (D3M). The goal of D3M is to develop algorithms and software to help overcome the data-science expertise gap by facilitating non-experts to construct complex empirical models through automation of large parts of the model-creation process. If successful, researchers using D3M tools will effectively have access to an army of “virtual data scientists,” DARPA stated.
This army of virtual data scientists is needed because some experts project deficits of 140,000 to 190,000 data scientists worldwide in 2016 alone, and increasing shortfalls in coming years. Also, because the process to build empirical models is so manual, their relative sophistication and value is often limited.
D3M aims to develop automated model discovery systems that lets users with subject matter expertise but no data science background create empirical models of real, complex processes.
This capability will enable subject matter experts to create empirical models without the need for data scientists, and will increase the productivity of expert data scientists via automation.
The blockchain is listed as an emerging disruptive technology and there’s an increasing amount being written about it. Here’s another simple explanation. There’s a 2.5 min video and a nice graphic.
Many people know it as the technology behind Bitcoin, but blockchain’s potential uses extend far beyond digital currencies.
Its admirers include Bill Gates and Richard Branson, and banks and insurers are falling over one another to be the first to work out how to use it.
So what exactly is blockchain, and why are Wall Street and Silicon Valley so excited about it?
What is blockchain?
Currently, most people use a trusted middleman such as a bank to make a transaction. But blockchain allows consumers and suppliers to connect directly, removing the need for a third party.
This is an interesting possibility - another weak signal of looming social-economic-political transformations.
The author of "The Lean Startup" and his team are in early talks with the Securities and Exchange Commission
Five years ago, when Eric Ries was working on the book that would become his best-selling entrepreneurship manifesto "The Lean Startup," he floated a provocative idea in the epilogue: Someone should build a new, “long-term” stock exchange. Its reforms, he wrote, would amend the frantic quarterly cycle to encourage investors and companies to make better decisions for the years ahead. When he showed a draft around, many readers gave him the same piece of advice: Kill that crazy part about the exchange. "It ruined my credibility for everything that had come before," Ries said he was told.
Now Ries is laying the groundwork to prove his early skeptics wrong. To bring the Long-Term Stock Exchange to life, he's assembled a team of about 20 engineers, finance executives and attorneys and raised a seed round from more than 30 investors, including venture capitalist Marc Andreessen; technology evangelist Tim O’Reilly; and Aneesh Chopra, the former chief technology officer of the United States. Ries has started early discussions with the U.S. Securities and Exchange Commission, but launching the LTSE could take several years. Wannabe exchanges typically go through months of informal talks with the SEC before filing a draft application, which LTSE plans to do this year. Regulators can then take months to decide whether to approve or delay applications.
If all goes according to plan, the LTSE could be the stock exchange that fixes what Ries sees as the plague of today's public markets: short-term thinking that squashes rational economic decisions. It's the same stigma that's driving more of Silicon Valley's multi-billion-dollar unicorn startups to say they're not even thinking of an IPO. "Everyone's being told, 'Don't go public,'" Ries said. "The most common conventional wisdom now is that going public will mean the end of your ability to innovate."
The domestication of DNA continues to accelerate - here’s an interesting development related to the harnessing of biological processes to manufacture a range of materials and products - and new life.
Mixing different types of modules together can yield a variety of structures, similar to the constructs that can be generated from Lego pieces. By creating a library of the modules, the scientists hope to be able to assemble structures on demand.
Shaped DNA frames that precisely link nanoparticles into different structures offer platform for designing functional nanomaterials
Scientists developed two DNA-based self-assembly approaches for desired nanostructures. The first approach allows the same set of nanoparticles to be connected into a variety of three-dimensional structures; the second facilitates the integration of different nanoparticles and DNA frames into interconnecting modules, expanding the diversity of possible structures. These approaches could enable the rational design of nanomaterials with enhanced or combined optical, electric, and magnetic properties to achieve desired functions.
Because we live in a non-ergodic universe this is a bigger deal than it might seem.
As scientists, we’ve only just begun to investigate what materials can be made on the nanoscale. Screening a million potentially useful nanoparticles, for example, could take several lifetimes. Once optimized, our tool will enable researchers to pick the winner much faster than conventional methods. We have the ultimate discovery tool.
The ability to make libraries of nanoparticles will open a new field of nanocombinatorics, where size — on a scale that matters — and composition become tunable parameters. This is a powerful approach to discovery science.
The discovery power of the gene chip is coming to nanotechnology. A tool to test millions (possibly billions) of different nanoparticles in a rapid manner and at a specific time in order to determine the best particle for a particular use has been developed by researchers from the Northwestern University.
The electrical, optical, mechanical, structural and chemical properties change when the materials get smaller. This offers new possibilities. Identifying what nanoparticle size and composition are optimal for a specific application, like catalysts, electronic devices, biodiagnostic labels, and pharmaceuticals is a difficult task.
This is fascinating - how much of real history is buried? This is worth the read. Near the end of the article it mentions that
the women of ENIAC have become role models for countless women and girls interested in pursuing careers in technical fields.
And this woman’s career also emulates in a fascinating the same pioneering ‘boldly going’.
Adelaide Rhodes, a marine zoologist and bioinformaticist at the Center for Genome Research and Biocomputing at Oregon State University, was inspired by the women ENIAC programmers.
As American women stepped up to support the war effort during WWII, a top-secret Army program picked six female mathematicians to code instructions for ENIAC, the first all-electronic digital computer. Their programming work launched a modern software industry.
When Betty “Jean” (Jennings) Bartik earned her degree in math from a rural Missouri college during WWII, her academic advisor suggested she become a schoolteacher, noting the impact she could have in a small community.
Instead, Bartik, who’d watched the men of her generation head overseas to fight, craved adventure herself. She landed a job in Philadelphia as a human computer and soon joined a select group of female mathematicians hired to calculate weapons trajectories to aid the war effort.
There, she and five other women went on to write instructions for the world’s first all-electronic, programmable computer. It launched the modern software industry and, ultimately, change the world.
Until recently, however, these women were all but forgotten.