Friday, January 9, 2015

Friday Thinking, 9 January 2015

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this.


In 2013, Google stated that Translate served 200 million people daily.
Dissolving National Borders


...Any new service environment such as that created by railways or motor cars or telegraph or radio, deeply modifies the very nature and image of the people who use them. Radical changes of identity happening in very sudden, brief intervals of time have proved more deadly and destructive to human values than were wars fought with hardware weapons.

In the electric age the alteration of human identity by new service environments of information have left whole populations without personal or community values to a degree that far exceeds the effects of food and fuel and energy shortages. I am suggesting that the Club of Rome is really talking to the old 19th Century situation of quantity and hardware and ignoring completely the effect of software information on the human psyche. The rip-off of human psychic resources by new media may far exceed the dangers involved in energy shortages from hardware….
Marshall McLuhan - Man and Media - 1979


Strategy is really about building algorithms (rules) that help drive optimal outcomes in decisions. Basically you’ve identified what you ultimately want to accomplish and strategy is how you drive towards that. The challenge, as outlined here, is that nothing’s ever quite as neat and tidy as we might hope. So rather than talking about strategy as a blueprint (or similar), which suggests everything is in its perfect place down to the millimeter, it’s better to think about it as an algorithm that helps you make the right decision as you traverse whatever landscape you happen to encounter. Algorithms are nothing more than a set of rules applied to any information or situation, which takes into account what you understand about the data and tries to find the best answer according to the ideas/ideals the programmer has set forth.

No strategy survives the battlefield. By viewing strategy as an algorithm, or set of simple rules, dispersed teams can make faster and more aligned decisions given their unique context. Moreover, expressing your strategy as an evolutionary algorithm gives it the ability to adapt and learn over time faster than through typical command-and-control.

Traditional strategic planning is a worthwhile exercise to evaluate your current market position and available resources. It’s a terrible way to prepare for the future. It’s no surprise that 70–90% of all strategic plans eventually fail.
Noah Brier - quoted in
Organizing for the Unpredictable
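
To make the "strategy as a set of simple rules" idea concrete, here is a minimal sketch - in Python, with entirely hypothetical rules and situation fields - of a strategy expressed as an ordered rule set that any dispersed team could apply to its own context:

```python
# A minimal sketch of "strategy as an algorithm": an ordered set of simple
# rules applied to whatever situation a team encounters.
# The rules and the fields of `situation` are hypothetical, for illustration only.

RULES = [
    # (condition on the situation, decision it implies)
    (lambda s: s["customer_impact"] == "high" and s["cost"] <= s["budget"],
     "do it now"),
    (lambda s: s["customer_impact"] == "high" and s["cost"] > s["budget"],
     "escalate for funding"),
    (lambda s: s["learning_value"] == "high",
     "run a small experiment"),
]
DEFAULT = "defer and revisit next quarter"

def decide(situation: dict) -> str:
    """Apply the rules in order; the first one that matches wins."""
    for condition, decision in RULES:
        if condition(situation):
            return decision
    return DEFAULT

# Two teams in different contexts get fast, aligned answers from the same strategy.
print(decide({"customer_impact": "high", "cost": 40, "budget": 100, "learning_value": "low"}))   # -> do it now
print(decide({"customer_impact": "low", "cost": 40, "budget": 100, "learning_value": "high"}))   # -> run a small experiment
```

Updating the strategy then means revising the rule list as teams learn, rather than reissuing a blueprint.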


Most of us grew up in and around organizations that fit a common template.  Strategy gets set at the top. Power trickles down. Big leaders appoint little leaders. Individuals compete for promotion. Compensation correlates with rank. Tasks are assigned. Rules proscribe actions. Managers assess performance.  This is the recipe for “bureaucracy,” the 150-year old mashup of military command structures and industrial engineering that constitutes the operating system for virtually every large-scale organization on the planet.

Large organizations of all types suffer from an assortment of congenital disabilities that no amount of incremental therapy can cure.  First, they are inertial.  They are frequently caught out by the future and seldom change in the absence of a crisis.  Deep change, when it happens, is belated and convulsive, and typically requires an overhaul of the leadership team. Absent the bloodshed, the dynamics of change in the world’s largest companies aren’t much different from what one sees in a poorly-governed, authoritarian regime—and for the same reason:  there are few, if any, mechanisms that facilitate proactive bottom-up renewal.

Second, large organizations are incremental. Despite their resource advantages, incumbents are seldom the authors of game-changing innovation. It’s not that veteran CEOs discount the value of innovation; rather, they’ve inherited organizational structures and processes that are inherently toxic to break-out thinking and relentless experimentation. Strangely, most CEOs seem resigned to this fact, since few, if any, have tackled the challenge of innovation with the sort of zeal and persistence they’ve devoted to the pursuit of operational efficiency. Their preferred strategy seems to be to acquire young companies that haven’t yet lost their own innovation mojo (but upon acquisition most likely will).

And finally, large organizations are emotionally sterile. Managers know how to command obedience and diligence, but most are clueless when it comes to galvanizing the sort of volunteerism that animates life on the social web.  Initiative, imagination and passion can’t be commanded—they’re gifts. Every day, employees choose whether to bring those gifts to work or not, and the evidence suggests they usually leave them at home.  In Gallup’s latest 142-country survey on the State of the Global Workplace, only 13% of employees were truly engaged in their work. Imagine, if you will, a car engine so woefully inefficient that only 13% of the gas it consumes actually combusts. That’s the sort of waste we’re talking about. Large organizations squander more human capability than they use.

Inertial.  Incremental.  Insipid.  As the winds of creative destruction continue to strengthen, these infirmities will become even more debilitating. Few companies, though, have made much progress in eradicating them.  Most of the recommended remedies—idea wikis, business incubators, online collaboration, design thinking, “authentic” leadership, et al—are no more than minor tweaks.  They are unlikely to be any more effective than the dozens of “fixes” that came before them. Remember T-groups, total quality management, skunk works, high performance teams, “intrapreneurship,” re-engineering, the learning organization, communities of practice, knowledge management, and customer centricity?  All of these were timely, and a few genuinely helpful, but none of them rendered organizations fundamentally more adaptable, innovative or engaging.  Band-Aids®, braces and bariatric surgery don’t fix genetic disorders.

And what about ideology? Business people typically regard themselves as pragmatists, individuals who take pride in their commonsense utilitarianism.  This is a conceit.  Managers, no less than libertarians, feminists, environmental campaigners and the devotees of Fox News, are shaped by their ideological biases.  So what’s the ideology of bureaucrats?  Controlism.  Open any thesaurus and you’ll find that the primary synonym for the word “manage,” when used as verb, is “control.”  “To manage” is “to control.”

I meet few executives around the world who are champions of bureaucracy, but neither do I meet many who are actively pursuing an alternative. For too long we’ve been fiddling at the margins.  We’ve flattened corporate hierarchies, but haven’t eliminated them.  We’ve eulogized empowerment, but haven’t distributed executive authority.  We’ve encouraged employees to speak up, but haven’t allowed them to set strategy.  We’ve been advocates for innovation, but haven’t systematically dismantled the barriers that keep it marginalized. We’ve talked (endlessly) about the need for change, but haven’t taught employees how to be internal activists. We’ve denounced bureaucracy, but we haven’t dethroned it; and now we must.
Gary Hamel - REINVENTING MANAGEMENT AT THE MASHUP: ARCHITECTURE & IDEOLOGY


That’s why most companies decay slowly over time. They tend to do approximately what they did before, with a few minor changes. It’s natural for people to want to work on things that they know aren’t going to fail. But incremental improvement is guaranteed to be obsolete over time. Especially in technology, where you know there’s going to be non-incremental change.
– Larry Page


Here’s a 30 min video interview with Gary Hamel that is worth watching.
THE END OF HIERARCHY: NATURAL LEADERSHIP AT W.L. GORE
MIX co-founder Gary Hamel’s conversation with Terri Kelly, president and CEO of W.L. Gore on the celebrated company's long-running experiment in natural leadership and managing without managers.

The 54-year-old maker of Gore-Tex high performance fabrics and some thousand other products—from Elixir guitar strings to Glide dental floss—employs more than 10,000 people in some fifty locations and is consistently ranked one of the best places to work and among the most innovative companies in the world.

At its heart is a management model that’s been ahead of its time for more than half a century. Bill Gore set out to build a company that was innovative at its core—and human to its core. In this conversation, Gary and Terri unpack the progressive principles and radical practices at the heart of W.L. Gore’s culture of innovation. How do you transcend the tradeoff between freedom and control? How do you distribute the work of management and build leadership capability across the entire organization? How do you make productive, collaborative decisions? And how do you measure and reward contribution in a peer-regulated organization?
There are a number of other interesting talks from the same conference -
MIX MASHUP 2014: REINVENTING THE TECHNOLOGY OF HUMAN ACCOMPLISHMENT
The MIX Mashup is a gathering of the vanguard of management innovators—pioneering leaders, courageous hackers, and agenda-setting thinkers from every realm of endeavor. It’s three days of the most advanced thinking and ambitious experiments—designed to inspire, equip, and connect participants in the collective project of making our organizations fit for the future—and fit for human beings.
The full set of talks can be found here - http://mixmashup.org/talks
For example - here’s
THE END OF BUREAUCRACY: WHEN NOBODY (AND EVERYBODY) IS THE BOSS
Morning Star is one of the world’s leading processors of tomatoes—and one of the most progressive models of a self-managed enterprise we’ve seen. The company was founded in 1970 with a distinct philosophy: people are most productive, creative and happy when they have personal control over their own lives. And the best organizations are those in which people are not managed by directive from above but when coordination happens among peers who manage their own relationships and commitments. And, seemingly impossibly, they’ve built a company to bring that philosophy to life: no bosses, no titles, no job descriptions, and a sweeping scope of authority when it comes to making decisions (about hiring, how to spend the company’s money, what direction to take).

Hear Paul, the co-founder of the Self-Management Institute and, until recently, Morning Star’s head of development, describe the company’s extraordinary—and extraordinarily effective—approach to replacing manager-management with peer- and self-management. It’s a dynamic, inventive, productive, and deeply human approach to structuring and managing work—and a powerful alternative to the standard operating system of bureaucracy.


In a world moving at the speed of the Internet, this is ancient news - maybe we should finally try to learn it.
For Best Results, Forget the Bonus
“Do this and you’ll get that.” These six words sum up the most popular way in which American business strives to improve performance in the workplace.

And it is very popular. At least three of four American corporations rely on some sort of incentive program. Piecework pay for factory workers, stock options for top executives, banquets and plaques for Employees of the Month, commissions for salespeople — the variations go on and on. The average company now resembles a television game show: “Tell our employees about the fabulous prizes we have for them if productivity improves!”

Most of us, accustomed to similar tactics at home and school, take for granted that incentives in the workplace are successful. After all, such incentives are basically rewards, and rewards work, don’t they?

The answer, surprisingly, is mostly no. While rewards are effective at producing temporary compliance, they are strikingly ineffective at producing lasting changes in attitudes or behavior. The news gets worse. About two dozen studies from the field of social psychology conclusively show that people who expect to receive a reward do not perform as well as those who expect nothing. This result, which holds for all sorts of rewards, people and tasks, is most dramatic when creativity is involved.

Are rewards as ineffective inside the workplace as they are outside it? Apparently so. Despite decades of widespread reliance on pay-for-performance schemes, I know of no controlled study demonstrating that rewards improve the quality of workplace performance on a long-term basis.


I’m sure everyone has heard of Udacity (which helped inspire the MOOC movement, Coursera, and edX). Here is their vision for 2015. What is interesting is that this really is the concept I envisioned as the necessary platform for scaling learning for the 21st-century post-labor-force worker. This is an idea worth watching.
A New University, by Industry
The link is to their website, but the content below is from an email newsletter I subscribe to.
As we look toward 2015 with excitement, I wanted to look back at what we built in 2014 and share what’s coming.
Our mission remains the same: to offer accessible and highly effective education so our students advance their careers in technology. Why? Because the skills gap keeps growing at an alarming speed, keeping too many on the sidelines or underemployed. We simply cannot accept that when a record number of jobs in technology remain vacant.

In 2014, we took a new approach to better deliver on our mission. Talking to countless employers convinced us to create a new type of university, a university by industry, a university built by Silicon Valley. One that:
  • teaches the skills that industry employers need today.
  • delivers credentials endorsed by employers.
  • provides education at a fraction of the cost and time of traditional schools.
We partnered with industry giants: AT&T, Google, Cloudera, Facebook, Salesforce, and others. Together, we created a new type of credentials, Nanodegrees, to prepare professionals to become Web Developers, Mobile Developers, and Data Analysts. Thousands of students have already enrolled in the Front-End Web Developer and Data Analyst Nanodegree programs. Industry leaders such as AT&T and Capital One are even offering them to their employees.

It’s only the beginning. In 2015, we’ll launch more Nanodegree programs, along with their supporting courses, starting with Full Stack Web Developer in February and iOS Developer in March. We’ll keep innovating, passionately and boldly, to make learning even more engaging and effective.


And here’s a 14 min video (plus transcript) from one of the co-founders of Coursera. “We’re seeing a significant increase over time in both the overall number of users and the number of active users. We’re also seeing a very significant increase in the number of people who pay for verified credentials, which are the Verified Certificates that we offer to people who complete certain courses.”
The Hype is Dead, but MOOCs Are Marching On
Just two years ago, massive open online courses (MOOCs) were all the rage. They were garnering lots of media attention and The New York Times called 2012 “the year of the MOOC.”
Today, though the hype has died down, the world’s largest provider of MOOCs – Coursera – keeps on innovating and developing its online platform to serve millions of learners. Coursera co-founder Daphne Koller, whom Knowledge@Wharton interviewed in November 2012, returned to campus recently to speak about her progress since launching her company 2.5 years ago, and she gives her predictions for what the MOOC landscape will look like in the future. In her interview, Koller also provides an update on how Coursera is staying afloat even though the vast majority of students don’t pay a penny for their education.

If you look at our current demographics, 75% of our users have college degrees. In turn, that means 25% do not, which is still 2.5 million people. That’s an awful lot of people who are getting access to education that otherwise wouldn’t have access.

It’s also important to remember that the 75% of users who have college degrees aren’t necessarily wealthy yuppies working on Wall Street. In many parts of the world, having a college degree is not a guarantee of employability. In some cases, including in developing countries, many colleges offer a very mediocre educational experience.

In other parts of the world, people are finding that the education they received in college 15 to 20 years ago is no longer adequate for jobs in the current economy. They need an educational refresher to access the jobs they want. We hear many stories about people who take these courses, even though they have college degrees, who find the experience to be transformational for their careers and their prospects.

Roughly 70% of people who earn Verified Certificates from either a course or a specialization are posting their credentials on LinkedIn. Coursera is currently the second biggest credential supplier on LinkedIn, right after Microsoft, which is incredible since we’ve only been operating for about 2.5 years. This suggests that prospective employees are seeing value in the credentials.


Now speaking of the scaling of learning and the growing skills gap - here’s something from Pew Research.
Technology’s Impact on Workers
The internet and cell phones have infiltrated every cranny of American workplaces, and digital technology has transformed vast numbers of American jobs. Work done in the most sophisticated scientific enterprises, entirely new technology businesses, the extensive array of knowledge and media endeavors, the places where crops are grown, the factory floor, and even mom-and-pop stores has been reshaped by new pathways to information and new avenues of selling goods and services. For most office workers now, life on the job means life online.

Pew Research surveyed online a representative sample of adult internet users and asked those who have jobs a series of questions about the role of digital technology in their work lives. This is not a sample representative of all workers. It covers online adults who also have full- or part-time jobs in any capacity. The most recent survey data from Pew Research in late 2013 shows that 94% of jobholders are internet users and they work in all kinds of enterprises from technology companies to non-technology firms; from big corporations to small proprietor operations; and from those in urban areas, farms, and places in between.


What will this emerging digital environment mean for everyone? Where does science go to meet this future? The rapidly evolving ecosystem associated with personal data is creating an entirely new field of scientific study, say computer scientists. And this requires a much more powerful ethics-based infrastructure. This is an interesting paper describing the emergence of a new science of human-data interaction and a “data ecosystem”, combining older disciplines such as computer science, statistics, sociology, psychology and behavioural economics, and fostering new transdisciplinary approaches.
Human-Data Interaction: The Human Face of the Data-Driven Society
The increasing generation and collection of personal data has created a complex ecosystem, often collaborative but sometimes combative, around companies and individuals engaging in the use of these data. We propose that the interactions between these agents warrants a new topic of study: Human-Data Interaction (HDI). In this paper we discuss how HDI sits at the intersection of various disciplines, including computer science, statistics, sociology, psychology and behavioural economics. We expose the challenges that HDI raises, organised into three core themes of legibility, agency and negotiability, and we present the HDI agenda to open up a dialogue amongst interested parties in the personal and big data ecosystems.


Speaking of the human-data interface - here’s a Kickstarter that should make us all wonder what the toys of 2025 will be. The 4 min video is well worth the watch.
JIBO, World's First Family Robot. 4,800 pre-sold!
JIBO Skills - INTELLIGENT & HANDS-FREE
Assistant - Politely reminds you of important tasks and events to help you stay on top of things.
Messenger - Recognizes you and each member of your household, to deliver the right messages to the right people at the right time & place.
EMOTIONAL CONNECTOR
Photographer - Uses natural cues like movement, speech, and smile detection to know when someone’s posing for a picture.
Avatar - See-and-track camera makes it easy to turn and look at people, to support video calling as if you are in the room.
FUN & SUPPORTIVE
Storyteller - Sound effects, graphics and physical movements make a responsive and interactive storytelling experience.
Companion - Physical presence with helpfulness and heart, JIBO will put a smile on your face and make you feel better.


While the focus of this short article is newsroom culture (endangered or not), it has a lot to offer the assessment of any workplace culture.
What defines a healthy newsroom culture?
Earlier this month, I had the honor of conducting a writing workshop in Washington, D.C., for the writers and editors of National Geographic.  It was a kick for me to work with a publication that I had read as a boy, one that, in 1963, had published a photo of my father, a U.S. Customs officer, pasting a sticker on the wooden crate that contained the Mona Lisa as she made her way on a tour of America.

The folks at NatGeo asked some great questions, and I want to answer one of them in this essay.
“You keep talking and asking questions about the ‘culture’ of this place,” asked one young man.   “What do you mean by ‘culture’?”

So I have decided to try a definition of my own in the context of a newsroom:
“The norms, practices, habits and routines of a workplace that create the conditions for excellent or sub-standard work.”

It is often easier to recognize fault lines in a culture from the outside although this must be done with caution. Is there such a thing as an “ideal” culture for a magazine or newsroom?  I cannot answer. The only questions I am qualified to answer are these:  “What is the best culture for me?  What kind of place keeps me happy and productive?”....


Another summary of the last year - this one about biotechnology. The speed of progress the last decade has seen really is amazing.
2014 in Biomedicine: Rewriting DNA, Decoding the Brain, and a GMO Paradox
From genetically modified foods to gene therapy, 2014 was a big year for rewriting biology.
The year began with a landmark event. A decade after the first human genome was decoded at a cost of about $3 billion, the sequencing-machine company Illumina, of San Diego, introduced a new model, the HiSeq X Ten, that can do it for around $1,000 per genome.

The system, which costs $10 million and can decode 20,000 genomes a year, was snapped up by large research labs, startup firms like J. Craig Venter’s Human Longevity (which plans to sequence 40,000 people a year), and even by the British government (the U.K. is the first country with a national genome sequencing project).
Francis de Souza, Illumina’s president, predicted that within two years the genomes of about 1.6 million people will have been sequenced.

Cheap sequencing means a deluge of information and a new role for technology designed to handle and exploit “big data.” The search giant Google was the tech company most attuned to the trend, launching a scientific project to collect biological data about healthy humans, and offering to store any genome on its servers for $25 per year. A coalition of genetics researchers backed by Google tried to introduce technical standards, like those that govern the Web, as a way of organizing an “Internet of DNA” over which researchers might share data.


Well, technically this article is still from 2014 - but it points to accelerating progress on the frontier of domesticating DNA. The question to keep in mind: where will the frontier be in another decade?
Machine Intelligence Cracks Genetic Controls
Every cell in your body reads the same genome, the DNA-encoded instruction set that builds proteins. But your cells couldn’t be more different. Neurons send electrical messages, liver cells break down chemicals, muscle cells move the body. How do cells employ the same basic set of genetic instructions to carry out their own specialized tasks? The answer lies in a complex, multilayered system that controls how proteins are made.

Most genetic research to date has focused on just 1 percent of the genome—the areas that code for proteins. But new research, published Dec. 18 in Science, provides an initial map for the sections of the genome that orchestrate this protein-building process. “It’s one thing to have the book—the big question is how you read the book,” said Brendan Frey, a computational biologist at the University of Toronto who led the new research.

Frey compares the genome to a recipe that a baker might use. All recipes include a list of ingredients—flour, eggs and butter, say—along with instructions for what to do with those ingredients. Inside a cell, the ingredients are the parts of the genome that code for proteins; surrounding them are the genome’s instructions for how to combine those ingredients.

Just as flour, eggs and butter can be transformed into hundreds of different baked goods, genetic components can be assembled into many different configurations. This process is called alternative splicing, and it’s how cells create such variety out of a single genetic code. Frey and his colleagues used a sophisticated form of machine learning to identify mutations in this instruction set and to predict what effects those mutations have.


And while we’re talking about genetics - this is a long article but it provides a very worthwhile new view of our genetic plasticity.
Solving the Autism Puzzle
For years scientists searched fruitlessly for the causes of autism by looking for genes shared by families prone to the disorder. Now researchers taking a new approach have begun to unlock its secrets.
The findings also provide insight into just why autism is so common. “Let me highlight a critical point, and one of the biggest insights to come from the genetics of autism,” says ­Jonathan Sebat, a professor at the University of California, San Diego, who previously worked in Wigler’s lab and helped to reveal this new genetic landscape. “We did not fully appreciate how plastic the genome is, in the sense of how much new mutation there is. The genome is mutating, evolving constantly, and there’s a steady influx of new mutations in the population. Every child born has roughly 60 new changes in their DNA sequence, and [one in] every 50 children born have at least one large rearrangement. This is a really significant contributor to developmental disorders.”

Another surprising discovery is that certain regions of the human genome seem especially prone to disruption. Not only do some of these genetic “hot spots” seem to be linked to many forms of autism, but some of them have a deep and significant evolutionary history. If you trace them back in time, as Evan Eichler’s laboratory has begun to do, you can begin to glimpse the emergence of precisely the traits that distinguish humans from all other animals. “It’s kind of a crazy idea,” Eichler says, “but it’s like autism is the price we pay for having an evolved human species.”

Copy number variations in one specific hot spot on the short arm of chromosome 16, for example, have been associated with autism. By comparing the DNA of chimpanzees, orangutans, a Neanderthal, and a Denisovan (another archaic human) with the genomes of more than 2,500 contemporary humans, including many with autism, Xander Nuttle, a member of Eichler’s group, has been able to watch this area on the chromosome undergo dramatic changes through evolutionary history. The region harbors a gene known as BOLA2 that seems to promote instability. Nonhuman primates have at most two copies of the gene; Neanderthals have two; contemporary humans have anywhere from three to 14, and the multiple copies of the gene appear in virtually every sample the researchers have looked at. This suggests that the extra copies of the BOLA2 gene, which predispose people to neurodevelopmental disorders like autism, must also confer some genetic benefit to the human species. Otherwise, evolutionary pressure would have scrubbed the duplications out of the genome. In other words, the same duplications that can lead to autism may also create what Eichler calls genetic “nurseries” in which new gene variants arise that enhance cognition or some other human trait.

At a meeting of the American Society of Human Genetics last fall, Nuttle reported that this mutation-prone region, which contains more than two dozen genes related to neurocognitive function, lies adjacent to an intriguing gene. “The evolutionary twist on this whole story,” says Eichler, “is that our genome is really set up to fail, in the sense that we’re prone to delete and duplicate. The flip side of it is that that selective disadvantage is offset by the emergence of novel genes that have conferred an advantage to us cognitively.”


Speaking of machine intelligence - here’s one cutting-edge example. At the speed of Moore’s Law, the question is: where will we be by 2025?
How Google "Translates" Pictures into Words Using Vector Space Mathematics
Google engineers have trained a machine-learning algorithm to write picture captions using the same techniques it developed for language translation.
Translating one language into another has always been a difficult task. But in recent years, Google has transformed this process by developing machine translation algorithms that change the nature of cross cultural communications through Google Translate.

Now that company is using the same machine learning technique to translate pictures into words. The result is a system that automatically generates picture captions that accurately describe the content of images. That’s something that will be useful for search engines, for automated publishing and for helping the visually impaired navigate the web and, indeed, the wider world.
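
The core idea - mapping an image into the same vector space that a language model decodes from - can be sketched in a few lines. Everything below (the toy vocabulary, dimensions, and random untrained weights) is hypothetical; it only illustrates the shape of an encoder-decoder captioning pipeline, not Google’s actual system:

```python
import numpy as np

# Toy sketch of caption generation: an image feature vector is projected into
# the decoder's hidden space, then words are produced one at a time.
# All parameters are random and untrained - for illustration only.
vocab = ["<start>", "a", "dog", "cat", "on", "grass", "<end>"]
rng = np.random.default_rng(0)

d_img, d_hid = 8, 8                           # embedding sizes (made up)
W_img = rng.normal(size=(d_hid, d_img))       # projects image features into the language space
W_hid = rng.normal(size=(d_hid, d_hid))       # recurrent weights
W_out = rng.normal(size=(len(vocab), d_hid))  # maps hidden state to word scores
E = rng.normal(size=(len(vocab), d_hid))      # word embeddings

def caption(image_features, max_len=5):
    """Greedy decoding: the image vector seeds the hidden state, then each
    step picks the highest-scoring next word."""
    h = np.tanh(W_img @ image_features)       # "translate" the image into the language space
    word, out = "<start>", []
    for _ in range(max_len):
        h = np.tanh(W_hid @ h + E[vocab.index(word)])
        word = vocab[int(np.argmax(W_out @ h))]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(caption(rng.normal(size=d_img)))   # gibberish until trained, but shows the pipeline
```

In systems like the one described, the image features come from a deep convolutional network and the decoder is trained on large collections of image-caption pairs; the sketch only shows why "translation" is an apt description - both ends live in the same vector space.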
Here is another review of machine vision, also from Tech Review.
The Revolutionary Technique That Quietly Changed Machine Vision Forever
Machines are now almost as good as humans at object recognition, and the turning point occurred in 2012, say computer scientists.
Or, put another way, it is only a matter of time before your smartphone is better at recognizing the content of your pictures than you are.
In space exploration, there is the Google Lunar X Prize for placing a rover on the lunar surface. In medicine, there is the Qualcomm Tricorder X Prize for developing a Star Trek-like device for diagnosing disease. There is even an incipient Artificial Intelligence X Prize for developing an AI system capable of delivering a captivating TED talk.

In the world of machine vision, the equivalent goal is to win the ImageNet Large-Scale Visual Recognition Challenge. This is a competition that has run every year since 2010 to evaluate image recognition algorithms. (It is designed to follow-on from a similar project called PASCAL VOC which ran from 2005 until 2012).

Contestants in this competition have two simple tasks. Presented with an image of some kind, the first task is to decide whether it contains a particular type of object or not. For example, a contestant might decide that there are cars in this image but no tigers. The second task is to find a particular object and draw a box around it. For example, a contestant might decide that there is a screwdriver at a certain position with a width of 50 pixels and a height of 30 pixels.

Oh, and one other thing: there are 1,000 different categories of objects ranging from abacus to zucchini, and contestants have to scour a database of over 1 million images to find every instance of each object. Tricky!

Computers have always had trouble identifying objects in real images so it is not hard to believe that the winners of these competitions have always performed poorly compared to humans. But all that changed in 2012 when a team from the University of Toronto in Canada entered an algorithm called SuperVision, which wiped the floor with the opposition.

Today, Olga Russakovsky at Stanford University in California and a few pals review the history of this competition and say that in retrospect, SuperVision’s comprehensive victory was a turning point for machine vision. Since then, they say, machine vision has improved at such a rapid pace that today it rivals human accuracy for the first time.  Russakovsky and co have compared humans against machines and their conclusion seems inevitable. “Our results indicate that a trained human annotator is capable of outperforming the best model (GoogLeNet) by approximately 1.7%,” they say.
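
For reference, the headline numbers in that comparison (GoogLeNet vs. a trained human annotator) are top-5 classification error rates. A minimal sketch of how that metric is computed, using made-up labels and predictions:

```python
# Top-5 error: a prediction counts as correct if the true label appears
# anywhere among the model's five highest-scoring guesses.
# The labels and predictions below are made up for illustration.

def top5_error(true_labels, ranked_predictions):
    """ranked_predictions: one list per image, ordered most to least confident."""
    misses = sum(1 for truth, preds in zip(true_labels, ranked_predictions)
                 if truth not in preds[:5])
    return misses / len(true_labels)

truth = ["tiger", "screwdriver", "abacus", "zucchini"]
preds = [
    ["cat", "tiger", "lion", "dog", "leopard", "fox"],           # hit (rank 2)
    ["hammer", "wrench", "pliers", "drill", "chisel", "saw"],    # miss
    ["abacus", "calculator", "keyboard", "piano", "typewriter"], # hit (rank 1)
    ["cucumber", "zucchini", "squash", "melon", "gourd"],        # hit (rank 2)
]
print(f"top-5 error: {top5_error(truth, preds):.1%}")  # -> 25.0%
```

So a claim like "outperforming the best model by approximately 1.7%" means the human annotator’s top-5 error was about 1.7 percentage points lower than GoogLeNet’s.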


Speaking of computational power - and where we are now as opposed to 15 years ago - here’s something coming to a new car near you very soon.
Any Driven Sunday: NVIDIA Kicks Off CES by Unveiling Tegra X1, NVIDIA DRIVE Auto Computers
Never come to Las Vegas with your pockets empty.
That’s why we brought along Tegra X1, a mobile super chip that packs a full teraflop of computing power into a slice of silicon no bigger than a thumbnail. How’s that for a game with high table stakes?

In the first big news of this year’s International Consumer Electronics Show, NVIDIA CEO Jen-Hsun Huang unveiled the new 256-core chip, which uses the same Maxwell architecture deployed in the world’s top gaming graphics cards. Slated to arrive in products during the first half of the year, Tegra X1 packs more power than the fastest supercomputer of 15 years ago, ASCI Red - a machine the size of a suburban family home. Run for 10 years by the U.S. Department of Energy’s Sandia National Laboratory, ASCI Red was the first teraflops supercomputer system. It occupied 1,600 square feet and gulped 500,000 watts of power. By contrast, Tegra X1 sips less than 15 watts.
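
Taking the article’s round numbers at face value - roughly one teraflop for both machines - the striking ratio is performance per watt. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope efficiency comparison, using the round figures quoted
# above (~1 teraflop each; 500,000 W for ASCI Red, under 15 W for Tegra X1).
TERAFLOP = 1e12

asci_red_flops_per_watt = TERAFLOP / 500_000   # ~2 megaflops per watt
tegra_x1_flops_per_watt = TERAFLOP / 15        # ~67 gigaflops per watt

print(f"ASCI Red : {asci_red_flops_per_watt / 1e6:.0f} MFLOPS per watt")
print(f"Tegra X1 : {tegra_x1_flops_per_watt / 1e9:.0f} GFLOPS per watt")
print(f"Efficiency gain: ~{tegra_x1_flops_per_watt / asci_red_flops_per_watt:,.0f}x")
```

A gain of roughly 33,000x in performance per watt over 15 years - not the raw teraflop itself - is what makes supercomputer-class processing plausible inside a car.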

“Your future cars will be the most advanced computers in the world… There will be more computing horsepower inside a car than anything you own today.” ...Powered by dual Tegra X1 processors, DRIVE PX, with inputs for 12 high-resolution cameras, promises to make driving safer and more enjoyable by introducing Surround-Vision and Auto-Valet capabilities.


Speaking about cars - here’s some ground-breaking news (in case you haven’t heard it already).
Toyota releases fuel cell patents for royalty-free use to all
Toyota just rocked the auto industry by announcing that it is opening to the public 5,680 of its patents related to fuel cell technology for royalty-free use.

Bob Carter, the company's senior vice-president of automotive operations, delivered the news on Monday at CES, following an elaborate presentation that touted the strengths of its fuel cell vehicle, the Toyota Mirai.
There was a collective gasp from the audience after Carter's announcement, likely because the decision could help jumpstart this area of the automotive industry, which is exactly what Toyota is counting on.

“By eliminating traditional corporate boundaries, we can speed the development of new technologies, and move into the future of mobility more quickly, effectively and economically,” Carter said.


This is a longish article that explores the experience of being driven in a self-driving car.
I Rode 500 Miles in a Self-Driving Car and Saw the Future. It’s Delightfully Dull
I was a few hours outside of Los Angeles, tooling down I-5 at the wheel of a sleek Audi A7 on a gorgeous day when a little girl in an SUV smiled and waved. I waved back.
With both hands.

This immediately freaked her out, and she started jumping up and down. All I could do was laugh, knowing my vigorous wave was in no way a safety hazard. In fact, I hadn’t touched the steering wheel in more than an hour.

What that little girl didn’t know, despite the stickers on the car, was that I was piloting Audi’s latest autonomous vehicle, a prototype designed specifically to handle the monotony of highway driving. The, er, driving was not nearly so difficult as the preparation—an arduous task that required a day of training in Arizona, a ream of paperwork and a little bureaucratic wrangling that resulted in the great state of California issuing me a license to operate an autonomous vehicle.

And so it was that I found myself riding along in the car of tomorrow on an autonomous road trip from Palo Alto, California to Las Vegas, where Audi is showing off autonomous tech that may be in showrooms by the end of the decade.


Here are the top picks by Peter Diamandis (@XPRIZE, @SingularityU, @PlanetaryRsrcs) - author of Abundance, investor in @Humin, and passionate about innovation and creating a world of abundance.
My Top Tech Picks for 2015
If you thought 2014 was thrilling, here’s a look at what I’m most excited about for 2015…
With CES happening next week, Abundance 360 at the end of January, and A360-digital kicking off in March, here are 11 of the most exciting new technologies moving from deceptive to disruptive this year.
1. Virtual Reality
2. Mass-market robots
3. Autonomous vehicles
4. Drones everywhere
5. Wireless power
6. Data & machine learning
7. Large-scale genome sequencing and data mining
8. Sensor explosion
9. Voice-control and “language-independent” interaction
10. 3D Printing
11. Bitcoin


Here’s something about materials - that may be on the market very soon.
15 times stronger than steel: Scientists develop strongest, lightest glass nanofibres in the world
The University of Southampton's Optoelectronics Research Centre (ORC) is pioneering research into developing the strongest silica nanofibres in the world.

Globally the quest has been on to find ultrahigh strength composites, leading ORC scientists to investigate light, ultrahigh strength nanowires that are not compromised by defects. Historically, carbon nanotubes were the strongest material available, but high strengths could only be measured in very short samples just a few microns long, providing little practical value.

Now research by ORC Principal Research Fellow Dr Gilberto Brambilla and ORC Director Professor Sir David Payne has resulted in the creation of the strongest, lightest-weight silica nanofibres - 'nanowires' that are 15 times stronger than steel and can potentially be manufactured in lengths of thousands of kilometres.


For Fun & Cool
The 2 min video is just cool.
The Cubli: a cube that can jump up, balance, and 'walk'
The Cubli is a 15 × 15 × 15 cm cube that can jump up and balance on its corner. Reaction wheels mounted on three faces of the cube rotate at high angular velocities and then brake suddenly, causing the Cubli to jump up. Once the Cubli has almost reached the corner stand up position, controlled motor torques are applied to make it balance on its corner. In addition to balancing, the motor torques can also be used to achieve a controlled fall such that the Cubli can be commanded to fall in any arbitrary direction. Combining these three abilities -- jumping up, balancing, and controlled falling -- the Cubli is able to 'walk'.
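
The jump is, at heart, conservation of angular momentum: when a fast-spinning reaction wheel is braked suddenly, its stored momentum is transferred to the cube body, kicking it up onto an edge or corner. A toy calculation with made-up numbers (not the Cubli’s real parameters) shows the idea:

```python
import math

# Toy illustration of the reaction-wheel jump. All numbers are hypothetical,
# chosen only to show the momentum-transfer idea, not the Cubli's real values.
I_wheel = 6e-4                 # wheel moment of inertia about the pivot axis, kg*m^2
I_body  = 2e-2                 # cube-plus-wheel moment of inertia about the pivot edge, kg*m^2
omega_wheel = 2 * math.pi * 3000 / 60   # wheel spinning at 3000 rpm, in rad/s

# Angular momentum about the pivot is (approximately) conserved during the
# very brief braking, so the wheel's momentum becomes the body's rotation:
L = I_wheel * omega_wheel
omega_body = L / I_body

print(f"wheel speed before braking: {omega_wheel:6.1f} rad/s")
print(f"cube speed after braking  : {omega_body:6.1f} rad/s")
# Whether the cube actually reaches the corner-balance position then depends on
# whether this kick carries enough kinetic energy to lift its centre of mass
# over the pivot; once near the balance point, the motors take over.
```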


Here’s a 3 min video on more progress in the domain of 3D printing. Still early days but moving fast.
Voxel8: The World's First 3D Electronics Printer
Voxel8 has created the world’s first 3D electronics printer from the ground up. It combines novel conductive materials and 3D printing technology from the Lewis Research Group at Harvard University with Project Wire, new software crafted by Autodesk for the Voxel8 printer.
The Voxel8 printer truly allows you to combine electronics with novel mechanical forms.


Here’s a fun article about how today matches the vision of 2015 portrayed in the 1989 movie “Back To the Future II”. Nice pictures.
The Actual Future Is So Much Cooler Than Back to the Future II Predicted
The two big things the classic trilogy missed: The rise of mobile and the Internet
Enough with the flying cars and Hoverboards already. There's way more to the future than a few cool ways to zip from one place to the next. I know this because the future, finally, is upon us.

Yes, 2015 is the year Marty McFly visits in Back to the Future II. I've been counting down to this moment since the film came out in 1989. Although only a small fraction of the story is actually set in the year 2015—which was still 30 years away from the 1985 present of the story—it's chockablock with technological predictions about what life would be like today. Many of which (sort of) came true.

But the biggest twin advances filmmakers didn't see coming: mobile Internet technology and the on-demand, personalized culture that formed around it. As a result, the version of 2015 in Back to the Future II will forever seem stuck in the past.

Looking back at an old movie's depictions of the future, people tend to fixate on what filmmakers got right and wrong. But examining how the real world advanced in this time is far more revealing. So we can't cook a pizza in 20 seconds with the (still fictional) Black & Decker hydrator. Hoverboards still haven't replaced skateboards. But life in the actual future is extraordinary. People carry tiny computers in their back pockets. Anyone can publish to a global audience from just about anywhere with the press of a button. Scientists can 3D print working body parts. Humans are able to share and access vast knowledge stores—troves of data, literature, and art—without leaving their homes. Cars may not fly but they can drive themselves. Considering what we can do with today's technology, the 2015 of Back to the Future II seems downright old-school.


And here’s what almost was. This is a lovely 3 min video explaining what could have been the beginning of the Internet.
PicturePhone: How Bell Telephone lost a half billion, but nearly created the internet
How Bell Telephone's PicturePhone, introduced in 1964, flopped yet nearly catalyzed the internet. Technically, it was an amazing achievement: Bell used the existing twisted-pair copper wire of the telephone network -- not broadband lines like today -- to produce black and white video on a screen about five inches square. And, amazingly for the time, it used a CCD-based camera. It was meant to be the most revolutionary communication medium of the century, driving subscribers to purchase broadband lines, but it failed miserably as a consumer product, costing Bell a half billion dollars.
