Thursday, September 3, 2015

Friday Thinking 4 September 2015

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 
In the 21st Century curiosity will SKILL the cat.


The late mathematician Alan Turing sketched a thought experiment known as the “Turing test” that could theoretically be used to determine whether a machine could think. Turing claimed that any machine capable of convincing someone it is human by responding to a series of questions would, by all measures, be capable of thinking.


As a side note, it’s important to stress that Turing was not claiming that the nature of thinking is universal. The way a human thinks may be different from the way a robot “thinks,” just as the way a bird flies is different from the way an airplane “flies.” Rather, Turing’s general point was that any entity capable of passing a Turing test would be capable of thinking in one form or another.
According to the novelist [Philip K.] Dick, the Turing test placed too much emphasis on intelligence. What actually makes us human is empathy. Without empathy, we are mere autopilot objects projecting into the void.
AI ROBOT THAT LEARNS NEW WORDS IN REAL-TIME TELLS HUMAN CREATORS IT WILL KEEP THEM IN A “PEOPLE ZOO”


How satisfied are we with our jobs?
Gallup regularly polls workers around the world to find out. Its survey last year found that almost 90 percent of workers were either “not engaged” with or “actively disengaged” from their jobs. Think about that: Nine out of 10 workers spend half their waking lives doing things they don’t really want to do in places they don’t particularly want to be.


Today, in factories, offices and other workplaces, the details may be different but the overall situation is the same: Work is structured on the assumption that we do it only because we have to. The call center employee is monitored to ensure that he ends each call quickly. The office worker’s keystrokes are overseen to guarantee productivity.


I think that this cynical and pessimistic approach to work is entirely backward. It is making us dissatisfied with our jobs — and it is also making us worse at them. For our sakes, and for the sakes of those who employ us, things need to change.
Rethinking Work

It’s called the blockchain, and it’s the digital ledger software code that powers bitcoin.


Masters is the CEO of Digital Asset Holdings, a New York tech startup. She says her firm is designing software that will enable banks, investors, and other market players to use blockchain technology to change the way they trade loans, bonds, and other assets. If she’s right, she’ll be at the center of yet another whirlwind that will change the markets.


“You should be taking this technology as seriously as you should have been taking the development of the Internet in the early 1990s,” ... “It’s analogous to e-mail for money.”


That’s a bold statement, but Masters isn’t the only voice heralding the coming of the blockchain. The Bank of England, in a report earlier this year, calls it the “first attempt at an Internet of finance,” while the Federal Reserve Bank of St. Louis hails it as a “stroke of genius.” In a June white paper, the World Economic Forum says, “The blockchain protocol threatens to disintermediate almost every process in financial services.”
Blythe Masters Tells Banks the Blockchain Changes Everything

...the most significant obstacle to the successful replication of experiments is the outdated text format of traditional journals: it simply can’t cope with how elaborate experiments have become. “Complexity was always an issue,” he said. “Even when biology was a much smaller enterprise, it relied on a degree of specialized craft in the laboratory. But, since the end of the nineties, we’ve seen a huge influx of new technologies into biology: genomics, proteomics, technologies like microarrays, complex genetic methods, and sophisticated microscopy and imaging techniques.” With every innovative technique, application, or new vendor selling a similar but slightly different technology or reagent, the potential for experiment-spoiling variation rises.
How Methods Videos Are Making Science Smarter

This is a great summary of Netflix’s approach to HR - for anyone interested in bringing some innovation to the HR function, this is a nice place to start. Each value (listed below) has a very concise explanation/definition.
Netflix Values: We Want to Work with People Who Embody These Nine Values
Judgment
Communication
Impact
Curiosity
Innovation
Courage
Passion
Honesty
Selflessness
Here’s a 12-minute video of Patty McCord, Chief Talent Officer at Netflix, speaking about these values in a talk titled A Culture of Innovation and Innovating our Culture.
Patty McCord: A Culture of Innovation
The real company values - as opposed to the nice-sounding values - are shown by who gets rewarded, promoted, or let go.

This is a very interesting document coming from the Googleverse. An experiment in a new approach to management - this is a must-read. Google has been researching decision-making processes for a long time.
Liquid Democracy is a way of using software to integrate a capacity to delegate voting on a subject-by-subject basis, as well as a deliberative capacity for online consultation in a policy-formation process. Ultimately this sort of technology can enable a much more engaged and participatory enhancement to organizational and democratic decision-making.
Google Votes: A Liquid Democracy Experiment on a Corporate Social Network
Abstract
This paper introduces Google Votes, an experiment in liquid democracy built on Google's internal corporate Google+ social network. Liquid democracy decision-making systems can scale to cover large groups by enabling voters to delegate their votes to other voters. This approach is in contrast to direct democracy systems where voters vote directly on issues, and representative democracy systems where voters elect representatives to vote on issues for them. Liquid democracy systems can provide many of the benefits of both direct and representative democracy systems with few of the weaknesses. Thus far, high implementation complexity and infrastructure costs have prevented widespread adoption. Google Votes demonstrates how the use of social-networking technology can overcome these barriers and enable practical liquid democracy systems. The case-study of Google Votes usage at Google over a 3 year timeframe is included, as well as a framework for evaluating vote visibility called the "Golden Rule of Liquid Democracy".


This paper argues that liquid democracy is a particularly promising form for democratic decision-making systems and building liquid democracy systems on social-networking software is a practical approach. It presents the Google Votes system built on Google's internal corporate Google+ social network. Google Votes is an experiment in applying liquid democracy to decision-making in the corporate environment. It was developed in 20% time (Schmidt and Rosenberg 2014) by a small group of Google engineers. The concepts related to the topic will be described next, followed by an overview of the system and the specific case study of 3 years usage at Google.
Here’s a corresponding Google TechTalk video
Liquid Democracy with Google Votes
ABSTRACT
Google Votes is an experiment in liquid democracy built on Google's internal corporate Google+ social network. A Liquid Democracy system gives all the control of Direct Democracy with the scalability of Representative Democracy. Users can vote directly or delegate power through their social networks. This talk covers user experience aspects of delegated voting and three graph algorithms for flowing votes through a social graph called Tally, Coverage, and Power.
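The delegation mechanic described in the abstract can be sketched in a few lines. This is a hypothetical toy version, not Google Votes’ actual Tally algorithm: each person either votes directly or delegates, and a delegated vote flows along the chain until it reaches someone who voted directly (cycles or dead-end chains discard the vote).

```python
def tally(direct_votes, delegations):
    """Toy liquid-democracy tally.

    direct_votes: {user: choice} for people who voted themselves.
    delegations:  {user: delegate} for people who passed their vote on.
    Each user's vote follows the delegation chain until it reaches a
    direct voter; a cycle or a chain with no direct voter loses the vote.
    """
    counts = {}
    for user in set(direct_votes) | set(delegations):
        seen = set()
        current = user
        while current not in direct_votes:
            if current in seen or current not in delegations:
                current = None  # cycle, or chain ends with a non-voter
                break
            seen.add(current)
            current = delegations[current]
        if current is not None:
            choice = direct_votes[current]
            counts[choice] = counts.get(choice, 0) + 1
    return counts
```

For example, if Alice votes “coconut”, Bob votes “vitamin”, and Carol and Dave delegate (Dave to Carol, Carol to Alice), the tally is coconut 3, vitamin 1 - Alice effectively carries three votes, which is the “Power” idea the talk refers to.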
And here’s an article discussing Google’s use of this system internally.
Google bods reform DEMOCRACY in coconut or vitamin water quandary
'Utopian' social network could go official – but then, it is on Google+ ... so
Google has developed an internal utopian voting system for its office events, which its creator hopes to make an official product.


So far 11,000 internal staff have cast some 75,000 votes on Google office events like Halloween contests and building names. Some 4,200 staff voted in a Microkitchen food event in which vote tallies determined participants.


The system is the brainchild of engineer Steve Hardt and runs on Google Plus (so perhaps its shelf life is limited -Ed). It overcomes the high complexity and cost issues of previous systems with a voting format described as a mix of direct and representative democracy without the faults of either.


Hardt said his project operated on liquid democracy and allowed users to delegate vote allocations to others. They can also cast a vote themselves, if they wish.
"Direct democracy is perfect utopia but it doesn't scale," said Hardt.


"[Representative democracy] ends up with a focus on the candidates, caring about 'does your representative have good hair' ... and you get distortions from elections like 'we can't make that happen, it's an election year'.


"Liquid democracy is designed to remove the shortcomings and take the best of both. People can delegate their votes to others, often people you trust or know, or people they know, and votes can flow on."

It Isn't That You Network,
It's Understanding Why You Network.
Most people don’t know. People that do, change the world.
Perhaps this is why rigid organizational hierarchies are reacting with ever-increasing hysteria around the need for control - a fear that innovation is making management less relevant? Dave Snowden, one of the world’s foremost researchers in knowledge management, has stated that Twitter is the best KM tool he’s ever used.
Serendipitous exchanges fuel innovation
Well, well, well…someone finally mapped out on Twitter what we intuitively know about how innovation happens:
The more diverse a person’s social network, the more likely that person is to be innovative. A diverse network provides exposure to people from different fields who behave and think differently. Good ideas emerge when the new information received is combined with what a person already knows.


It’s really simple: interactions, not individuals, drive breakthroughs.
A lot of these random interactions happen on online networks, like Twitter. So MIT conducted a study to find out if Twitter makes employees more innovative. They found that there’s a link between the diversity of people’s Twitter networks and the quality of their ideas.
Here’s the MIT research site.
How Twitter Users Can Generate Better Ideas
New research suggests that employees with a diverse Twitter network — one that exposes them to people and ideas they don’t already know — tend to generate better ideas.
Innovations never happen without good ideas. But what prompts people to come up with their best ideas? It’s hard to beat old-fashioned, face-to-face networking. Even Steve Jobs, renowned for his digital evangelism, recognized the importance of social interaction in achieving innovation. In his role as CEO of Pixar Animation Studios (a role he held in addition to being a cofounder and CEO of Apple Inc.), Jobs instructed the architect of Pixar’s new headquarters to design physical space that encouraged staff to get out of their offices and mingle, particularly with those with whom they normally wouldn’t interact. Jobs believed that serendipitous exchanges fueled innovation.


A multitude of empirical studies confirm what Jobs intuitively knew. The more diverse a person’s social network, the more likely that person is to be innovative. A diverse network provides exposure to people from different fields who behave and think differently. Good ideas emerge when the new information received is combined with what a person already knows. But in today’s digitally connected world, many relationships are formed and maintained online through public social media platforms such as Twitter, Facebook and LinkedIn. Increasingly, employees are using such platforms for work-related purposes.
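One way to make “diversity of a social network” concrete is to score how evenly a person’s contacts spread across different fields. The sketch below is a hypothetical proxy (the MIT study’s actual metric differed): Shannon entropy over contacts’ fields, which is 0 when everyone is from one field and grows as the mix broadens.

```python
import math
from collections import Counter

def network_diversity(contact_fields):
    """Hypothetical diversity proxy: Shannon entropy (in bits) of the
    distribution of fields among a person's contacts."""
    counts = Counter(contact_fields)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A network of four contacts in four different fields scores 2.0 bits; four contacts all in the same field score 0.0 - capturing the intuition that exposure to different-thinking people is what feeds good ideas.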


Technology innovation isn’t just a phenomenon of the developed world - this is a brief account of innovation in Africa. One thing to keep in mind is the power of solar energy to bring needed electricity everywhere, via decentralized production and distribution of energy for the digital economy and environment.
The Rise Of Silicon Savannah And Africa’s Tech Movement
Nascent as it may be, Sub-Saharan Africa (SSA) does have a promising tech sector — a growing patchwork of entrepreneurs, startups, and innovation centers coalescing country to country.


Kenya is now a recognized IT hub. Facebook recently expanded on the continent. And Silicon Valley VC is funneling into ventures from South Africa to Nigeria.
These pieces are coming together as Africa’s budding tech culture and ecosystem emerge.


The Rise of Silicon Savannah
Most discussions of the origins of Africa’s tech movement circle back to Kenya. From 2007 through 2010 a combination of circumstance, coincidence, and visionary individuals laid down four markers inspiring the country’s Silicon Savannah moniker:
  • Mobile money,
  • A global crowdsourcing app,
  • Africa’s tech incubator model; and
  • A genuine government commitment to ICT policy.


In 2007, Kenyan telecom company Safaricom launched its M-PESA mobile money service to a market lacking retail banking infrastructure yet abundant in mobile phone users. The product converted even the most basic cell phones into roaming bank accounts and money-transfer devices.


Within two years M-PESA was winning international tech awards after gaining nearly six million customers and transferring billions annually. The mobile money service shaped the continent’s most recognized example of technological leapfrogging: launching ordinary Africans without bank accounts right over traditional brick-and-mortar finance into the digital economy.


Shortly after M-PESA’s arrival, political events in Kenya would inspire creation of one of Africa’s first globally recognized apps, Ushahidi. In late 2007, four technologists — Erik Hersman (an American who grew up in Kenya), activist Ory Okolloh, IT blogger Juliana Rotich, and programmer David Kobia — linked up to see what could be done to quell sporadic violence as a result of an inconclusive presidential election.


The Ushahidi software that evolved became a highly effective tool for digitally mapping demographic events. As Kenya shifted back to stability, requests came in from around the globe to adapt Ushahidi for other purposes. By the end of 2008, the app had become Ushahidi the international tech company, which now has multiple applications in more than 20 countries.

Here’s an interesting piece of meta-research suggesting that science remains highly dependent not just on personal knowledge but on a collective tacit knowledge generated and sustained by social practice.
Scientists Replicated 100 Psychology Studies, and Fewer Than Half Got the Same Results
The massive project shows that reproducibility problems plague even top scientific journals
Academic journals and the press regularly serve up fresh helpings of fascinating psychological research findings. But how many of those experiments would produce the same results a second time around?


According to work presented today in Science, fewer than half of 100 studies published in 2008 in three top psychology journals could be replicated successfully. The international effort included 270 scientists who re-ran other people's studies as part of The Reproducibility Project: Psychology, led by Brian Nosek of the University of Virginia.


The eye-opening results don't necessarily mean that those original findings were incorrect or that the scientific process is flawed. When one study finds an effect that a second study can't replicate, there are several possible reasons, says co-author Cody Christopherson of Southern Oregon University. Study A's result may be false, or Study B's results may be false—or there may be some subtle differences in the way the two studies were conducted that impacted the results.


“This project is not evidence that anything is broken. Rather, it's an example of science doing what science does,” says Christopherson. “It's impossible to be wrong in a final sense in science. You have to be temporarily wrong, perhaps many times, before you are ever right.”


Across the sciences, research is considered reproducible when an independent team can conduct a published experiment, following the original methods as closely as possible, and get the same results. It's one key part of the process for building evidence to support theories. Even today, 100 years after Albert Einstein presented his general theory of relativity, scientists regularly repeat tests of its predictions and look for cases where his famous description of gravity does not apply.


"Scientific evidence does not rely on trusting the authority of the person who made the discovery," team member Angela Attwood, a psychology professor at the University of Bristol, said in a statement. "Rather, credibility accumulates through independent replication and elaboration of the ideas and evidence."


The Reproducibility Project, a community-based crowdsourcing effort, kicked off in 2011 to test how well this measure of credibility applies to recent research in psychology. Scientists, some recruited and some volunteers, reviewed a pool of studies and selected one for replication that matched their own interest and expertise. Their data and results were shared online and reviewed and analyzed by other participating scientists for inclusion in the large Science study.

Given the above article, here’s something that all science organizations should seriously consider - not just to help replication but to take science publication into the 21st century - not just including the digital data in the report, but publishing the methods section as a video. This approach to publication will help scale learning and accelerate the spread of knowledge - which should be the aim of publication. It also addresses some of the implications of tacit knowledge.
How Methods Videos Are Making Science Smarter
Yale University’s Robert Fernandez prepared the lab bench for the camera as a chef might arrange his mise en place, deftly laying out a mini vortex, a fly aspirator, a T-maze, and a thermometer. Step by step, in a precisely edited sequence, Fernandez and his colleagues demonstrated an experiment on the social behavior of fruit flies. The video showed how to anesthetize the flies in ice, how to use the aspirator to collect them, how to agitate some of them in the churning vortex to provoke a stress reaction, and how to manipulate the maze elevator to the choice point, such that the unagitated flies would be encouraged to choose between the two sides of the T shape—a vial recently vacated by their stressed brethren, or a fresh vial. Agitated flies, it turns out, leave behind an odorant that calmer flies instinctively shun. As Anne Simon, the designer of the protocol, told me, the research was designed to improve the understanding of psychiatric illnesses and asocial disorders such as autism, in part by isolating certain genes in those flies that didn’t display normal avoidance behaviors.


Fernandez’s study was filmed, produced, and eventually published, last December, by the Journal of Visualized Experiments. Founded in 2006, JOVE now has a database of more than four thousand videos, with about eighty more added each month. They are usually between ten and fifteen minutes long, and they range in subject from biology and chemistry to neuroscience and medicine. “For a scientist trying to explain a methodology in writing, it’s very difficult to describe all the necessary details of a multi-stage technical process,” JOVE’s co-founder, Moshe Pritsker, told me. “Confusion over the smallest details can result in months of lost effort.” Replicability—researchers’ capacity to reproduce their colleagues’ experimental findings in order to build on them—is a bedrock principle of scientific progress. But copying an experiment often requires visiting the original lab and seeing it performed. Simon’s fruit-fly protocol, for instance, demands that various minutiae be precisely tuned—lighting, temperature, humidity, and even whether you’ve cut new vials from their plastic bags far enough in advance to let out the stale air. “Video makes replication more efficient,” Pritsker said.


The videos can be of particular help to researchers who are not naturally aware of the dexterity that a specific laboratory procedure requires. As Jonathan Butcher, of Cornell’s School of Biomedical Engineering, put it, “Not everybody is intrinsically a good gardener.” Recently, for example, Butcher’s colleagues sent him an e-mail indicating with skepticism that they couldn’t replicate some of his results. The procedure in question required gently swabbing off cells from the lining of blood vessels in the valves of the heart. When Butcher invited the researchers to his lab and watched them try it, he realized that they were oblivious as to how to do it delicately. “They just thought that scraping is scraping,” he said. After they observed a visual demonstration, they were able to replicate the procedure. Rather than repeating this process for numerous investigators, Butcher published a video in JOVE. Since then, he hasn’t been contacted by other doubters. Indeed, Pritsker said, the journal’s sweet spot is anything that requires animal surgery, in which the convenience of visuals veers toward necessity.

This is a fascinating article discussing recent advances in our understanding of DNA and what’s called the nucleome.
The human genome takes shape and shifts over time
DNA twists and turns into interacting sections that determine what a cell does and when
If you could unravel all the DNA in a single human cell and stretch it out, you’d have a molecular ribbon about 2 meters long and 2 nanometers across. Now imagine packing it all back into the cell’s nucleus, a container only 5 to 10 micrometers wide. That would be like taking a telephone cord that runs from Manhattan to San Francisco and cramming it into a two-story suburban house.
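The telephone-cord analogy holds up to a quick back-of-envelope check. The distances below are rough assumptions (not from the article): about 4,700 km from Manhattan to San Francisco, and about 10 m for a two-story house.

```python
# Back-of-envelope check of the DNA packing analogy.
dna_length_m = 2.0          # unraveled DNA from a single human cell
nucleus_width_m = 10e-6     # upper end of the 5-10 micrometer range
cord_length_m = 4.7e6       # ~4,700 km, Manhattan to San Francisco (rough)
house_width_m = 10.0        # two-story suburban house (rough)

dna_ratio = dna_length_m / nucleus_width_m    # ~2 x 10^5
cord_ratio = cord_length_m / house_width_m    # ~4.7 x 10^5
```

Both compressions land in the same order of magnitude, around 10^5, which is what makes the comparison apt.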


Fitting all that genetic material into a cramped space is step one. Just as important is how the material is organized. The cell’s complete catalog of DNA — its genome — must be configured in a specific three-dimensional shape to work properly. That 3-D organization of nuclear material — a configuration called the nucleome — helps control how and when genes are activated, defining the cell’s identity and its job in the body.


Researchers have long realized the importance of DNA’s precisely arranged structure. But only recently have new technologies made it possible to explore this architecture deeply. With simulations, indirect measurements and better imaging, scientists hope to reveal more about how the nucleome’s intricate folds regulate healthy cells. Better views will also help scientists understand the role that disrupted nucleomes play in aging and diseases, such as progeria and cancer.


“It is conceivable that every nuclear process has an element of structure in it,” says molecular geneticist Bing Ren of the University of California, San Diego School of Medicine. “It’s surprising, in fact, that we studied DNA for so long and yet we still have relatively little understanding of its 3-D architecture.”


Make that 4-D. Recent work shows that fully understanding the nucleome requires analysis of its rearrangements in space over time. A cell’s nucleome changes during the course of a single day as the cell responds to its environment.


Last year, the National Institutes of Health launched a five-year, 4-D Nucleome program, committing more than $120 million to identify better tools and techniques for mapping the complexities of the genome’s 4-D structure. Geneticists, molecular biologists, mathematicians, biophysicists and others are now on an ambitious quest to chart the ever-shifting nuclear terrain.  

It’s early days for bio-3D printing, but it could be hugely transformative.
This Low-Cost 3-D Printer Can Produce Human Organs And Bones
Cheaper biofabrication is helping researchers test drugs and treatments—it's way better than testing on a mouse.
At a lab in Philadelphia's Drexel University, a desktop 3-D printer is cranking out miniature samples of bones. In Toronto, another researcher is using the same printer to make living tumors for drug testing. It looks like an ordinary 3-D printer, but instead of plastic, it squirts out living cells.


BioBots, the startup behind the device, wants to change how researchers do biology. "We've been doing experiments on cells in a dish since 1905, and that's still what we're doing today to learn about how things work inside of our body," says Danny Cabrera, CEO of BioBots. "But the body is a three-dimensional structure. Cells in our body are used to interacting with the world in 3-D. The fact that we've been doing biology in 2-D for over 100 years now is sort of limiting."

Everyone knows about Moore’s Law - and recently there have been questions about whether it can continue. The issue of continuing the exponential trajectory of computational power is different from that of packing more transistors per CPU. Here’s one possible new computational paradigm.
Introducing a Brain-inspired Computer
TrueNorth's neurons to revolutionize system architecture
Six years ago, IBM and our university partners embarked on a quest—to build a brain-inspired machine—that at the time appeared impossible. Today, in an article published in Science, we deliver on the DARPA SyNAPSE metric of a one million neuron brain-inspired processor. The chip consumes merely 70 milliwatts, and is capable of 46 billion synaptic operations per second, per watt—literally a synaptic supercomputer in your palm.


Along the way—progressing through Phase 0, Phase 1, Phase 2, and Phase 3—we have journeyed from neuroscience to supercomputing, to a new computer architecture, to a new programming language, to algorithms, applications, and now to a new chip—TrueNorth.


Let me take this opportunity to take you through the road untraveled. At this moment, I hope this reflection will incite within you a burning desire to collaborate and partner with us to make the future journey a joint one.

This may not be from a brain-inspired computer - but it seems that Watson may be better at what it does than a human brain with significant education and experience. Another occupation ready to be ‘enhanced’?
Speech-classifier program is better at predicting psychosis than psychiatrists
100% accurate
An automated speech analysis program correctly differentiated between at-risk young people who developed psychosis over a two-and-a-half year period and those who did not. In a proof-of-principle study, researchers at Columbia University Medical Center, New York State Psychiatric Institute, and the IBM T. J. Watson Research Center found that the computerized analysis provided a more accurate classification than clinical ratings.  The study, “Automated Analysis of Free Speech Predicts Psychosis Onset in High-Risk Youths,” was published today in NPJ-Schizophrenia.


About one percent of the population between the age of 14 and 27 is considered to be at clinical high risk (CHR) for psychosis.  CHR individuals have symptoms such as unusual or tangential thinking, perceptual changes, and suspiciousness. About 20% will go on to experience a full-blown psychotic episode. Identifying who falls in that 20% category before psychosis occurs has been an elusive goal. Early identification could lead to intervention and support that could delay, mitigate or even prevent the onset of serious mental illness.


Speech provides a unique window into the mind, giving important clues about what people are thinking and feeling. Participants in the study took part in an open-ended, narrative interview in which they described their subjective experiences. These interviews were transcribed and then analyzed by computer for patterns of speech, including semantics (meaning) and syntax (structure).


The analysis established each patient’s semantic coherence (how well he or she stayed on topic), and syntactic structure, such as phrase length and use of determiner words that link the phrases. A clinical psychiatrist may intuitively recognize these signs of disorganized thoughts in a traditional interview, but a machine can augment what is heard by precisely measuring the variables. The participants were then followed for two and a half years.
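The “semantic coherence” measure can be illustrated with a toy sketch. The study’s actual analysis used latent semantic analysis over interview transcripts; the version below is only a hypothetical bag-of-words stand-in that scores how similar consecutive sentences are (staying on topic yields higher scores, tangential jumps lower ones).

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def coherence(sentences):
    """Average similarity between each sentence and the next -
    a crude proxy for how well a speaker stays on topic."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    if len(vecs) < 2:
        return 1.0
    sims = [cosine(vecs[i], vecs[i + 1]) for i in range(len(vecs) - 1)]
    return sum(sims) / len(sims)
```

The point of machine measurement, as the article notes, is precision: a clinician hears disorganization intuitively, while the program assigns it a number that can be tracked across a two-and-a-half-year follow-up.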

This is a very interesting development in manufacturing metal products - growing rather than machining.
This startup can grow metal like a tree, and it's about to hit the big time
A little known Seattle startup could do for metal what 3D printing is doing for other materials like plastic.
A Seattle startup has found a way to grow high performance metals in a cheap and energy efficient way, marking an important breakthrough for industries like construction, automotive, and oil and gas.


You can already find some of the metals from seven-year-old company Modumetal on oil rigs off the Australian and African coasts, as well as off Texas in the U.S. Those metals can withstand the ocean’s corrosive power up to eight times longer than conventional materials, according to the company.


On Tuesday, Modumetal took a big step towards its goal of gaining a bigger market for its innovative recipe. The company said that it had raised $33.5 million in funding that will go to increasing production and sales along with developing new uses for its metals.


Modumetal’s CEO and co-founder Christina Lomasney, a physicist who’s spent years working on electrochemistry and advanced materials, told Fortune that the company’s metal growing process is “the ideal way of making materials.” It is similar to the way that “Mother Nature has evolved [growing things] over eons,” she said.

This is a longish article but well worth the read and consideration - it’s not another entry in the long list of diatribes about automation and job loss, but about exponential innovation.
Is a Cambrian Explosion Coming for Robotics?
About half a billion years ago, life on earth experienced a short period of very rapid diversification called the “Cambrian Explosion.” Many theories have been proposed for the cause of the Cambrian Explosion, with one of the most provocative being the evolution of vision, which allowed animals to dramatically increase their ability to hunt and find mates (for discussion, see Parker 2003). Today, technological developments on several fronts are fomenting a similar explosion in the diversification and applicability of robotics. Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth. In Cloud Robotics—a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows. Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. Interestingly, Li (2014) noted that one of the robotic capabilities recently enabled by these combined technologies is vision—the same capability that may have played a leading role in the Cambrian Explosion.


How soon might a Cambrian Explosion of robotics occur? It is hard to tell. Some say we should consider the history of computer chess, where brute force search and heuristic algorithms can now beat the best human player yet no chess-playing program inherently knows how to handle even a simple adjacent problem, like how to win at a straightforward game like tic-tac-toe (Brooks 2015). In this view, specialized robots will improve at performing well-defined tasks, but in the real world, there are far more problems yet to be solved than ways presently known to solve them.


Eight Technical Drivers
A number of technologies relevant to the development of robotics are improving at exponential rates. Here, I discuss eight of the most important. The first three technological developments relate to individual robots; the next two relate to connectivity; and the final three relate to the capacities of the Internet that will shape the future of Cloud Robotics.
1) Exponential growth in computing performance.
2) Improvements in electromechanical design tools and numerically controlled manufacturing tools.
3) Improvements in electrical energy storage.
4) Improvements in electronics power efficiency.
5) Exponential expansion of the availability and performance of local wireless digital communications.
6) Exponential growth in the scale and performance of the Internet.
7) Exponential growth of worldwide data storage.
8) Exponential growth in global computation power.

Here’s an article highlighting that automation and AI can augment the human rather than replace the human - enabling new emergent capabilities and more advanced products and services.
Automation in the Newsroom
How algorithms are helping reporters expand coverage, engage audiences, and respond to breaking news
Philana Patterson, assistant business editor for the Associated Press, has been covering business since the mid-1990s. Before joining the AP, she worked as a business reporter for both local newspapers and Dow Jones Newswires and as a producer at Bloomberg. “I’ve written thousands of earnings stories, and I’ve edited even more,” she says. “I’m very familiar with earnings.” Patterson manages more than a dozen staffers on the business news desk, and her expertise landed her on an AP stylebook committee that sets the guidelines for AP’s earnings stories. So last year, when the AP needed someone to train its newest newsroom member on how to write an earnings story, Patterson was an obvious choice.


The trainee wasn’t a fresh-faced j-school graduate, responsible for covering a dozen companies a quarter, however. It was a piece of software called Wordsmith, and by the end of its first year on the job, it would write more stories than Patterson had in her entire career. Patterson’s job was to get it up to speed.


Patterson’s task is becoming increasingly common in newsrooms. Journalists at ProPublica, Forbes, The New York Times, Oregon Public Broadcasting, Yahoo, and others are using algorithms to help them tell stories about business and sports as well as education, inequality, public safety, and more. For most organizations, automating parts of reporting and publishing efforts is a way to both reduce reporters’ workloads and to take advantage of new data resources. In the process, automation is raising new questions about what it means to encode news judgment in algorithms, how to customize stories to target specific audiences without making ethical missteps, and how to communicate these new efforts to audiences.


Automation is also opening up new opportunities for journalists to do what they do best: tell stories that matter. With new tools for discovering and understanding massive amounts of information, journalists and publishers alike are finding new ways to identify and report important, very human tales embedded in big data.
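Systems like Wordsmith generally work by filling prose templates from structured data - the journalist's judgment is encoded in the template wording and the comparison logic. A minimal sketch of that technique; the company name, figures, and template phrasing here are hypothetical illustrations, not AP's actual templates or data.

```python
# Minimal sketch of template-based story generation from structured
# earnings data. The wording and thresholds are invented examples.

def earnings_story(company, eps, eps_expected, revenue_m):
    """Fill a simple earnings-story template from structured fields."""
    # Editorial judgment, encoded as a comparison: did earnings beat
    # the analyst consensus or not?
    verdict = "beat" if eps > eps_expected else "fell short of"
    return (
        f"{company} reported earnings of ${eps:.2f} per share, "
        f"which {verdict} analyst expectations of ${eps_expected:.2f}. "
        f"Revenue came in at ${revenue_m:.0f} million."
    )

print(earnings_story("Acme Corp", 1.12, 1.05, 830.0))
```

The craft Patterson brings is in writing and tuning those templates and rules, not in drafting each individual story.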

This is very distressing - 21st Century business and 19th Century brutality. This may make many rethink supporting Amazon with their purchases.
Worse than Wal-Mart: Amazon’s sick brutality and secret history of ruthlessly intimidating workers
You might find your Prime membership morally indefensible after reading these stories about worker mistreatment
When I first did research on Walmart’s workplace practices in the early 2000s, I came away convinced that Walmart was the most egregiously ruthless corporation in America. However, ten years later, there is a strong challenger for this dubious distinction—Amazon Corporation. Within the corporate world, Amazon now ranks with Apple as among the United States’ most esteemed businesses. Jeff Bezos, Amazon’s founder and CEO, came in second in the Harvard Business Review’s 2012 world rankings of admired CEOs, and Amazon was third in CNN’s 2012 list of the world’s most admired companies. Amazon is now a leading global seller not only of books but also of music and movie DVDs, video games, gift cards, cell phones, and magazine subscriptions. Like Walmart itself, Amazon combines state-of-the-art CBSs with human resource practices reminiscent of the nineteenth and early twentieth centuries.


Amazon equals Walmart in the use of monitoring technologies to track the minute-by-minute movements and performance of employees and in settings that go beyond the assembly line to include their movement between loading and unloading docks, between packing and unpacking stations, and to and from the miles of shelving at what Amazon calls its “fulfillment centers”—gigantic warehouses where goods ordered by Amazon’s online customers are sent by manufacturers and wholesalers, there to be shelved, packaged, and sent out again to the Amazon customer.


Amazon’s shop-floor processes are an extreme variant of Taylorism that Frederick Winslow Taylor himself, nearly a century after his death, would have no trouble recognizing. With this twenty-first-century Taylorism, management experts, scientific managers, take the basic workplace tasks at Amazon, such as the movement, shelving, and packaging of goods, and break down these tasks into their subtasks, usually measured in seconds; then rely on time and motion studies to find the fastest way to perform each subtask; and then reassemble the subtasks and make this “one best way” the process that employees must follow.

This is an amazing breakthrough - a brilliant affordance now revealed - although I’m not sure Verizon’s approach is the killer app. First there is trust in Verizon to consider, and next there is the cost. That said, a separate device with an app that uses a person’s existing phone (and service) and provides encrypted (or good-enough) privacy - that’s what the market needs. I wonder who else will grasp the adjacent possibles now within reach.
Verizon’s New Service Aims to Connect Older Cars, Keep Them Humming
After a brief delay, Verizon is ready with a service designed to bring some connected car features to older vehicles.
The $14.99-per-month service, dubbed Hum, offers roadside assistance, car diagnostics and help locating a mechanic. A companion app also lets users track vehicle records and helps drivers remember where they parked and track the time on their parking meters.
The service, originally known as Verizon Vehicle, debuted in January at the North American Auto Show and was supposed to launch by June.


Powering the service are two pieces of hardware — a Bluetooth speaker that mounts on a visor and a wireless modem that plugs into the diagnostic port included on most cars built since 1996.


Because the kit has its own modem, Hum subscribers don’t have to be Verizon phone customers. The name change was designed, in part, to make that distinction clear.
Verizon is not alone in tapping the OBD-II port to help wire up older cars, though different companies are using it in different ways. The port, originally designed to help mechanics diagnose mechanical problems, is being tapped by Metromile to offer usage-based insurance, while Mojio has a device that connects cars to a range of different applications.
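All of these devices ultimately speak the same standardized OBD-II request/response protocol over that port. As a concrete illustration, engine RPM is mode 01, PID 0C, and the reply "41 0C A B" encodes RPM as (256*A + B)/4. A minimal sketch of decoding such a reply (the response string is a made-up example; real devices read it from the car over serial or Bluetooth):

```python
# Decode an OBD-II mode 01, PID 0C (engine RPM) response string.
# Per the OBD-II PID encoding, "41 0C A B" means RPM = (256*A + B) / 4.

def decode_rpm(response: str) -> float:
    """Decode a '41 0C A B' hex reply into engine RPM."""
    parts = response.split()
    if parts[0] != "41" or parts[1] != "0C":
        raise ValueError("not a mode-01 PID-0C reply")
    a, b = int(parts[2], 16), int(parts[3], 16)
    return (256 * a + b) / 4

print(decode_rpm("41 0C 1A F8"))  # (26*256 + 248) / 4 = 1726.0
```

The same port carries hundreds of such PIDs (speed, coolant temperature, fuel level), which is why one cheap dongle can power insurance, diagnostics, and app platforms alike.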

The saga of Bitcoin continues, here’s a very interesting article about the evolution of Bitcoin and its version of the blockchain.
The Looming Problem That Could Kill Bitcoin
The man who took over stewardship of Bitcoin from its mysterious inventor says the currency is in serious trouble.
The way things are going, the digital currency Bitcoin will start to malfunction early next year. Transactions will become increasingly delayed, and the system of money now worth $3.3 billion will begin to die as its flakiness drives people away. So says Gavin Andresen, who in 2010 was designated chief caretaker of the code that powers Bitcoin by its shadowy creator. Andresen held the role of “core maintainer” during most of Bitcoin’s improbable rise; he stepped down last year but still remains heavily involved with the currency (see “The Man Who Really Built Bitcoin”).


Andresen’s gloomy prediction stems from the fact that Bitcoin can’t process more than seven transactions a second. That’s a tiny volume compared to the tens of thousands per second that payment systems like Visa can handle—and a limit he expects to start crippling Bitcoin early in 2016. It stems from the maximum size of the “blocks” that are added to the digital ledger of Bitcoin transactions, the blockchain, by people dubbed miners who run software that confirms Bitcoin transactions and creates new Bitcoin (see “What Bitcoin Is and Why It Matters”).
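The seven-per-second figure follows from simple arithmetic. A back-of-the-envelope sketch, assuming the 1 MB block size limit, the ~10-minute average block interval, and an average transaction size of roughly 250 bytes (a commonly cited figure at the time):

```python
# Rough derivation of Bitcoin's ~7 transactions/second ceiling.
MAX_BLOCK_BYTES = 1_000_000     # 1 MB block size limit
BLOCK_INTERVAL_S = 600          # one block every ~10 minutes
AVG_TX_BYTES = 250              # assumed average transaction size

tx_per_block = MAX_BLOCK_BYTES / AVG_TX_BYTES      # ~4000 transactions
tx_per_second = tx_per_block / BLOCK_INTERVAL_S    # ~6.7 tx/s
print(round(tx_per_second, 1))
```

Raising the block size limit raises the numerator, which is exactly what Andresen’s proposal does.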


Andresen’s proposed solution triggered an uproar among people who use or work with Bitcoin when he introduced it two weeks ago. Rather than continuing to work with the developers who maintain Bitcoin’s code, Andresen released his solution in the form of an alternative version of the Bitcoin software called BitcoinXT and urged the community to switch over. If 75 percent of miners have adopted his fix after January 11, 2016, it will trigger a two-week grace period and then allow a “fork” of the blockchain with higher capacity. Critics consider that to be a reckless toying with Bitcoin’s future; Andresen, who now works on Bitcoin with the support of MIT’s Media Lab, says it is necessary to prevent the currency from strangling itself. He spoke with MIT Technology Review’s San Francisco bureau chief, Tom Simonite.
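The 75 percent adoption test is, as I understand Andresen’s proposal (BIP 101), measured over a sliding window of blocks: activation triggers once 750 of the last 1,000 blocks signal support. A minimal sketch of that check over a list of per-block support flags; the block data here is made up for illustration.

```python
# Sliding-window check of the BitcoinXT-style activation rule:
# the fork triggers once `threshold` of the last `window` blocks
# signal support (750 of 1000 = 75%).

def fork_activated(block_flags, window=1000, threshold=750):
    """True if any `window` consecutive blocks include at least
    `threshold` supporting blocks (True = block signals support)."""
    if len(block_flags) < window:
        return False
    count = sum(block_flags[:window])       # support in first window
    if count >= threshold:
        return True
    for i in range(window, len(block_flags)):
        # Slide the window forward by one block.
        count += block_flags[i] - block_flags[i - window]
        if count >= threshold:
            return True
    return False

print(fork_activated([True] * 800 + [False] * 200))   # 80% support
```

Counting over a long window, rather than a snapshot, makes the trigger robust to short-term swings in which miners happen to find blocks.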
