Friday, January 23, 2015

Friday Thinking, 23 January 2015

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this.

But that first time I checked my email from my sofa, with no wires going in the computer, I understood. From now on, the Internet wasn't going to be a thing I went somewhere to use. It would be all around us, like the ether, filling every space (except for this room).
I knew it wouldn't happen immediately, but it was going to happen. And I loved that feeling of being privy to what would happen next.

These glimpses can be addictive. For some of us, the sensation of having one foot in the future is what first attracted us to technology, or scifi.

But these glimpses are also deceptive. … Whenever we try to predict what it's actually going to be like to live in that future, what the future is going to taste like, we invariably fail, and in the most ridiculous ways.

It's like a weird law of nature. We can see the technologies coming, but that knowledge somehow makes the future less predictable.

Perhaps we predict the Roomba a hundred years in advance, but set it in a world where women are still wearing crinoline and whalebone corsets.
Or else we correctly predict that the Encyclopedia Britannica will one day fit on the head of a pin, never imagining that the Britannica itself will have become a relic, replaced by the free, collaborative, sprawling something called Wikipedia.

Such predictions aren't wrong, they're "not even wrong", they miss the basic point. The future makes fools of us all.
MACIEJ CEGLOWSKI - OUR COMRADE THE ELECTRON



Insisting on the "Intelligence" framework obscures the ways that power, money and influence are being re-distributed by modern computational services. That is bad. It's beyond merely old-fashioned; frankly, it's becoming part of a sucker's game. Asking empathic questions about Apple Siri's civil rights, her alleged feelings, her chosen form of governance, what wise methods she herself might choose to re-structure human society—that tenderness doesn't help. It's obscurantist. Such questions hide what is at stake. They darken our understanding. We will never move from the present-day Siri to a situation like that…

What would really help would be some much-improved, up-dated, critically informed language, fit to describe the modern weird-sister quartet of Siri, Cortana, Now and Echo, and what their owners and engineers really want to accomplish, and how, and why, and what that might, or might not mean to our own civil rights, feelings, and forms of governance and society. That's today's problem. Those are tomorrow's problems, even more so. Yesterday's "Machines That Think" problem will never appear upon the public stage. The Machine That Thinks is not a Machine. It doesn't Think. It's not even an actress. It's a moldy dress-up chest full of old, mouse-eaten clothes.
Bruce Sterling - Thinking Machines

The most important thing about making machines that can think is that they will think different.
Because of a quirk in our evolutionary history, we are cruising as the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence "general purpose" because compared to other kinds of minds we have met it can solve more kinds of problems, but as we build more and more synthetic minds we'll come to realize that human thinking is not general at all. It is only one species of thinking.

The kind of thinking done by the emerging AIs in 2014 is not like human thinking. While they can accomplish tasks—such as playing chess, driving a car, describing the contents of a photograph—that we once believed only humans could do, they don't do it in a human-like fashion. Facebook has the ability to ramp up an AI that can start with a photo of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this ability very un-human. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, in order that they don't think like us. One of the advantages of having AIs drive our cars is that they won't drive like humans, with our easily distracted minds.

Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can't think. ...in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve.
Kevin Kelly - Call Them Artificial Aliens



Machines that think? That's as fallacious as people that think! Thinking involves processing information, begetting new physical order from incoming streams of physical order. Thinking is a precious ability, which unfortunately, is not the privilege of single units, such as machines or people, but a property of the systems in which these units come to "life."
Cesar Hidalgo - Machines Don't Think, But Neither Do People



If we aren’t busy disrupting ourselves, there’s someone out there getting ready to disrupt us - as noted by Google and many others. A very important and famous quote:
"The greatest shortcoming of the human race is our inability to understand the exponential function." - Prof. Al Bartlett
This short video may sound like lots of hype - and the author, Peter Diamandis, is definitely pushing his book - but it’s still worth the listen, just to be reminded of what exponential thinking is about.
BOLD - Video - 6 Stages of Exponential Growth
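As a toy illustration of Bartlett's point (a hedged sketch, not taken from the video): thirty equal steps cover thirty metres, while thirty doubling steps cover over a billion.

```python
def linear_steps(n: int, step: int = 1) -> int:
    """Distance covered by n equal steps."""
    return n * step

def doubling_steps(n: int) -> int:
    """Distance covered when each step doubles the last: 1 + 2 + 4 + ... + 2**(n-1)."""
    return 2 ** n - 1

print(linear_steps(30))    # 30
print(doubling_steps(30))  # 1073741823 -- over a billion
```

The gap between the two numbers is the intuition failure Bartlett is pointing at: our minds extrapolate linearly by default.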



OK - if you bothered to watch the exponential growth video, this next series of articles about AI and the blockchain is VITAL for anyone who wants to gain even the tiniest capacity to understand the huge tsunami of disruption that is poised to transform how we work and how we measure, account for and exchange currencies in the next 10-15 years.


Before we go off with scary exponential thinking about AI, this is an excellent, balanced discussion of the ongoing media conversation. “the ability to design an AGI may lag far behind the computing power required to run one”
There is a very important consideration to hold in mind in this line of thinking. The article discusses AI in quite a siloed way, not considering likely corresponding advances in other fields - such as bio and cognitive technologies - that could produce combinatory innovations that would either speed things up or bring new forms of human-technology integration.
Brooks and Searle on AI volition and timelines
Nick Bostrom’s concerns about the future of AI have sparked a busy public discussion. His arguments were echoed by leading AI researcher Stuart Russell in “Transcending complacency on superintelligent machines” (co-authored with Stephen Hawking, Max Tegmark, and Frank Wilczek), and a number of journalists, scientists, and technologists have subsequently chimed in. Given the topic’s complexity, I’ve been surprised by the positivity and thoughtfulness of most of the coverage (some overused clichés aside).

Unfortunately, what most people probably take away from these articles is ‘Stephen Hawking thinks AI is scary!’, not the chains of reasoning that led Hawking, Russell, or others to their present views. When Elon Musk chimes in with his own concerns and cites Bostrom’s book Superintelligence: Paths, Dangers, Strategies, commenters seem to be more interested in immediately echoing or dismissing Musk’s worries than in looking into his source.

The end result is more of a referendum on people’s positive or negative associations with the word ‘AI’ than a debate over Bostrom’s substantive claims. If ‘AI’ calls to mind science fiction dystopias for you, the temptation is to squeeze real AI researchers into your ‘mad scientists poised to unleash an evil robot army’ stereotype. Equally, if ‘AI’ calls to mind your day job testing edge detection algorithms, that same urge to force new data into old patterns makes it tempting to squeeze Bostrom and Hawking into the ‘naïve technophobes worried about the evil robot uprising’ stereotype.

Thus roboticist Rodney Brooks’ recent blog post “Artificial intelligence is a tool, not a threat” does an excellent job dispelling common myths about the cutting edge of AI, and philosopher John Searle’s review of Superintelligence draws out some important ambiguities in our concepts of subjectivity and mind; but both writers scarcely intersect with Bostrom’s (or Russell’s, or Hawking’s) ideas. Both pattern-match Bostrom to the nearest available ‘evil robot panic’ stereotype, and stop there.

Brooks and Searle don’t appreciate how new the arguments in Superintelligence are. In the interest of making it easier to engage with these important topics, and less appealing to force the relevant technical and strategic questions into the model of decades-old debates, I’ll address three of the largest misunderstandings one might come away with after seeing Musk, Searle, Brooks, and others’ public comments: conflating present and future AI risks, conflating risk severity with risk imminence, and conflating risk from autonomous algorithmic decision-making with risk from human-style antisocial dispositions.


Here’s a long article from Edge.org posing its annual question, with short responses by about 180 of the world’s top thinkers on the question of AI - definitely worth the scan.
2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."   
THE EDGE QUESTION—2015
WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?

Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier, that evoked rich and provocative commentaries.

Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg".  Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history. (See George Dyson's 2004 Edge feature "Turing's Cathedral".) So, once again, this time with rigor, the Edge Question—2015:


This is the first of a four-part series (so far) on Google and search by Steven Levy. Each ‘part’ is long, but all four parts are a MUST READ for anyone who wants to know where we really are and how fast AI is moving now. It’s time we began to think about just how smart YouTube, Google Search, Google Now, Android, Chrome and Google+ will be by 2020 and by 2025.
The NeverEnding Search - Google Still in Search
Google’s flagship product has been part of our lives for so long that we take it for granted. But Google doesn’t. Part One of a study of Search’s quiet transformation.
Why is the sky blue?
Children often ask this question, and very few parents are able to provide the answer unaided. Not too long ago, finding the correct reply would have taken, at the very least, a dive into the encyclopedia, maybe even a trip to the library. In more recent years, moms and dads have simply rushed to the computer, fired up Google, assessed the links presented in response to the question, quickly read through the explanations, and parsed it so their rug rat could understand it.

But in 2015, even that seemingly expedited process won’t do. For one thing, questions are less likely to be typed into a search field than dictated to a mobile device. And while selecting the most relevant of a ranked list of links remains a valid approach for certain queries, people who ask questions with well-defined answers — like this one — have learned to expect answers right now. They are disappointed, even angry, when Google cannot provide it.

So . . . . “Okay, Google….why is the sky blue?”
It takes less than a second for an Android phone to respond to this spoken query in an intelligible but obviously automated voice:
“A clear cloudless day-time sky is blue because molecules in the air scatter blue light from the sun more than they scatter red light.”
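The spoken answer compresses Rayleigh scattering, in which scattering intensity scales as 1/wavelength⁴. A quick back-of-envelope check (the wavelengths below are illustrative round numbers, not from the article):

```python
blue_nm, red_nm = 450, 700            # rough wavelengths of blue and red light, in nanometres
ratio = (red_nm / blue_nm) ** 4       # Rayleigh scattering intensity ~ 1 / wavelength**4
print(f"Blue light is scattered roughly {ratio:.1f}x more strongly than red")
```

A factor of nearly six in favour of short wavelengths is why the scattered daytime sky looks blue.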


Here’s a government initiative from the UK - this is worth the look for anyone interested in policy around open data.
Open data
Accessible, machine-readable data about services
Data that is truly open is:
  • accessible (ideally via the internet) at no more than the cost of reproduction, without limitations based on user identity or intent
  • in a digital, machine-readable format for interoperation with other data
  • free of restriction on use or redistribution in its licensing conditions
The data that you should make open
Overall, government produces a lot of data that describes the services that we offer and how well those services are performing, for example data from analytics tools or key performance indicators. There’s also data on how people use these services and who those people are.
Data about service performance allows service managers to see how well a service is running. It also means that users can hold us to account. Data about service performance should therefore be public data. You should publish all public data, unless it is private data collected from people or restricted for national security reasons.

Public data is anonymised data:
  • on which public services are run and assessed
  • on which policy decisions are based
  • that is collected or generated in the course of your service delivery
If for some reason you have made a procurement choice that means your performance data is monitored or stored by a third party, you should make sure that you have the right to access, export, share and reuse that data openly and in an open format.


Speaking of open data - here’s a very simple, accessible description of the Bitcoin protocol’s underlying technology, called the blockchain, and its potential for disrupting not just the world of finance but any single source of authority. This should be a MUST READ for anyone interested in the future of currency, contracts, organizations and the corresponding transactions.
There’s a blockchain for that!
The code that secures Bitcoin could also power an alternate Internet. First, though, it has to work.
There’s this hopelessly geeky new technology. It’s too hard to understand and use. How could it ever break the mass market? Yet developers are excited, venture capital is pouring in, and industry players are taking note. Something big might be happening.
That is exactly how the Web looked back in 1994 — right before it exploded. Two decades later, it’s beginning to feel like we might be at a similar liminal moment. Our new contender for the Next Big Thing is the blockchain — the baffling yet alluring innovation that underlies the Bitcoin digital currency.

Wait a minute and I’ll explain exactly how the blockchain works. (Or at least try.) For now, think of it as a way of transferring a digital message from one party to another, where both parties can count on the integrity of the message, even when they don’t trust, or even know, each other. Right now, these messages are mostly virtual cash. But they could be any kind of information.

At root, the blockchain is all about replacing the servers that power today’s online world with computing power and storage that we all share. Every network requires what programmers call a “single source of truth” — the authority that says, “this is real,” “this user is who she claims to be,” “this transaction occurred.” To date, we have depended on servers run by corporations and governments to provide our single sources of truth. Even the Internet itself uses a handful of root servers to make the domain-name system work.

The blockchain turns the entire network into its source of truth. It’s a mechanism for us to collectively confer legitimacy on one another. That’s why it appeals to the same people who fell in love with the Internet and the Web 20 years ago: No individual or company owns it, and anyone can participate in it.


Moving forward from Bitcoin - here’s something that is not just pushing the frontier in thought - but being applied to the ‘Nation State’. This is a must read for anyone interested in imagining the future of currency and the economy.
What is FIMKRYPTO?
FIMKRYPTO, abbreviated FIMK, plural FIM, is "2.5G" peer-to-peer payment system software containing an embedded cryptocurrency protocol. Bitcoin was the first successful cryptocurrency; however, technology has evolved so much that currency features are only a fragment of modern cryptoprotocols.
Why 2.5G?
Bitcoin was the first cryptoprotocol of the 1st generation, and its purpose was originally limited to being a payment instrument. New protocols have been built around Bitcoin's idea of a decentralized peer-to-peer payment network. FIMK is based on one of the most promising adaptations of the p2p protocol, NXT. Unlike many other alternative cryptoprotocols, NXT is not technically based on Bitcoin. NXT's source code, architecture and algorithms are completely original, thus it can be called a "2G" cryptosystem. The FIMK system contains some significant changes that can metaphorically be counted as a leap of half a generation, thus we call FIMK a "2.5G" cryptosystem.
How does the basic income of FIMK work?
Krypto FIN ry pays basic income to all identified citizens of Finland who opt in and are more than 15 years of age. Basic income payments begin during 2014, starting with 100 FIM per month. The monthly amount may be adjusted after a period according to a vote of the board of the association, and will vary according to the resources available and the number of opted-in citizens, among other factors. Preliminary calculations show that Krypto FIN ry will be able to pay basic income for several years, and may continue as long as the distribution of extra block rewards from the genesis block continues - nearly four years.


Here is a good article that explains some of the interesting aspects of the blockchain idea.
The Blockchain is the New Database, Get Ready to Rewrite Everything
If you understand the core innovations around the blockchain idea, you’ll realize that the technology concept behind it is similar to that of a database, except that the way you interact with that database is very different.

The blockchain concept represents a paradigm shift in how software engineers will write software applications in the future, and it is one of the key concepts behind the Bitcoin revolution that needs to be well understood. In this post, I’d like to explain 5 of these concepts, and how they interrelate to one another in the context of this new computing paradigm that is unfolding in front of us. They are: the blockchain, decentralized consensus, trusted computing, smart contracts and proof of work / stake. This computing paradigm is important, because it is a catalyst for the creation of decentralized applications, a next-step evolution from distributed computing architectural constructs.
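Of the concepts the post names, proof of work is the easiest to sketch in code. Here is a hedged toy version (real systems such as Bitcoin use far higher difficulty and a different block encoding): search for a nonce whose hash, combined with the data, starts with a set number of zero digits. Finding the nonce is expensive; verifying it takes a single hash.

```python
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    """Search for a nonce such that sha256(data + nonce) begins with
    `difficulty` zero hex digits. Costly to find, cheap to check."""
    target = "0" * difficulty
    nonce = 0
    while not hashlib.sha256(f"{data}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

nonce = proof_of_work("block of transactions", difficulty=4)
digest = hashlib.sha256(f"block of transactions{nonce}".encode()).hexdigest()
print(nonce, digest[:12])  # one hash verifies what took thousands of hashes to find
```

That asymmetry - expensive to produce, trivial to verify - is what lets a decentralized network reach consensus without a trusted server.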

Here’s a 2015 prediction about the blockchain from Forbes. There are a number of great short videos in this article that are worth the view.
Tech 2015: Block Chain Will Break Free From Bitcoin To Power Distributed Apps
In many ways, the cryptocurrency Bitcoin had a very bad year in 2014. It has fallen from a value of over $1,000 per Bitcoin last January to just over $300 today. This volatility and the attendant speculation have led some critics, notably Nobel Prize-winning economist Paul Krugman, to consider the cash alternative to be a form of “the long con.” In positive economic terms, he questions whether Bitcoin can be a stable store of value.

I often agree with Krugman, but I sense that his deeper unease is with what he terms the “normative economics”: is Bitcoin a good idea for society? He identifies his unease with the observation that, “Bitcoin fever was and is intimately tied up with libertarian anti-government fantasies.” It is hard to ignore the libertarian agenda behind Bitcoin, and yet it is also hard for me not to see programmable currency as a necessary building block of the 21st century global economy.

My use of the word “block” is intended to evoke the foundational technology that underpins Bitcoin, the block chain. For the uninitiated (that’s most of us), there is a public ledger of Bitcoin transactions that is published to a distributed network of participating computers. Once every ten minutes or so a batch of transactions is released in a “block” to be validated by the network and added to the “block chain.” A unique cryptographic “hash” identifies each block and transaction and permanently fixes them in chronological order….
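The chronological fixing Forbes describes can be sketched in a few lines (a simplified model, not Bitcoin's actual block format): each block stores the previous block's hash, so its own hash commits to the entire history before it, and altering any old block breaks every link after it.

```python
import hashlib
import json

def make_block(transactions: list, prev_hash: str) -> dict:
    """Bundle transactions with the previous block's hash; the block's own
    hash then commits to both, fixing the chain in order."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 10"], prev_hash=genesis["hash"])

# Changing an earlier block changes its hash, so later blocks no longer link to it.
tampered = make_block(["coinbase -> mallory: 50"], prev_hash="0" * 64)
print(block1["prev_hash"] == genesis["hash"])   # True
print(block1["prev_hash"] == tampered["hash"])  # False
```

In the real network this chaining is combined with proof of work, which is what makes rewriting history computationally infeasible rather than merely detectable.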


If we think about how we are connected to the world today - via desktop and mobile - here’s something to think about in the next decade.
A Brain-Computer Interface That Works Wirelessly
A wireless transmitter could give paralyzed people a practical way to control TVs, computers, or wheelchairs with their thoughts.
A few paralyzed patients could soon be using a wireless brain-computer interface able to stream their thought commands as quickly as a home Internet connection.
After more than a decade of engineering work, researchers at Brown University and a Utah company, Blackrock Microsystems, have commercialized a wireless device that can be attached to a person’s skull and transmit via radio thought commands collected from a brain implant. Blackrock says it will seek clearance for the system from the U.S. Food and Drug Administration, so that the mental remote control can be tested in volunteers, possibly as soon as this year.

The device was developed by a consortium, called BrainGate, which is based at Brown and was among the first to place implants in the brains of paralyzed people and show that electrical signals emitted by neurons inside the cortex could be recorded, then used to steer a wheelchair or direct a robotic arm (see “Implanting Hope”).

A major limit to these provocative experiments has been that patients can only use the prosthetic with the help of a crew of laboratory assistants. The brain signals are collected through a cable screwed into a port on their skull, then fed along wires to a bulky rack of signal processors. “Using this in the home setting is inconceivable or impractical when you are tethered to a bunch of electronics,” says Arto Nurmikko, the Brown professor of engineering who led the design and fabrication of the wireless system.

The new interface does away with much of that wiring by processing brain data inside a device about the size of an automobile gas cap. It is attached to the skull and wired to electrodes inside the brain. Inside the device is a processor to amplify the faint electrical spikes emitted by neurons, circuits to digitize the information, and a radio to beam it a distance of a few meters to a receiver. There, the information is available as a control signal, say to move a cursor across a computer screen.


Here’s an interesting study on the power of teams - that might be applicable to other forms of analysis and research.
Teams better than individuals at intelligence analysis, research finds
When it comes to predicting important world events, teams do a better job than individuals, and laypeople can be trained to be effective forecasters even without access to classified records, according to new research published by the American Psychological Association.
According to the authors, the study findings challenge some common practices of the U.S. intelligence community, where professional analysts usually specialize in one topic or region and send reports up the chain of command. In what the authors call the first scientific study of its kind, researchers identified common characteristics that improved predictions by amateur participants in a geopolitical forecasting tournament. The contest was sponsored by the Intelligence Advanced Research Projects Activity (IARPA), an agency within the Office of the Director of National Intelligence that funds research to improve intelligence practices.

"Teams could share information and discuss their rationales but still submit anonymous forecasts," said Barbara Mellers, PhD, one of the lead researchers and a psychology and marketing professor at the University of Pennsylvania. "This type of teamwork that protects dissent is really important, and I don't think it's being used to the full extent that it should be in the intelligence community."

The most accurate forecasters in the tournament scored higher on pattern detection, cognitive flexibility, knowledge of geopolitics and open-mindedness, including a willingness to consider unorthodox outcomes, the study found. "They would consider ideas and possibilities that were different from their pet theories or beliefs," Mellers said.


Here is some recent research from the PEW Internet and Society Project - very interesting - worth the read.
Social Media and the Cost of Caring
For generations, commentators have worried about the impact of technology on people’s stress. Trains and industrial machinery were seen as noisy disruptors of pastoral village life that put people on edge. Telephones interrupted quiet times in homes. Watches and clocks added to the de-humanizing time pressures on factory workers to be productive. Radio and television were organized around the advertising that enabled modern consumer culture and heightened people’s status anxieties.

Inevitably, the critics have shifted their focus onto digital technology. There has been considerable commentary about whether internet use in general and social media use in particular are related to higher levels of stress. Such analysts often suggest that it is the heaviest users of these technologies that are most at risk. Critics fear that these technologies take over people’s lives, creating time pressures that put people at risk for the negative physical and psychological health effects that can result from stress.

This research explores whether the use of social media, mobile phones and the internet is associated with higher levels of stress. In a Pew Research Center survey of 1,801 adults, we asked participants about the extent to which they felt their lives were stressful, using an established scale of stress called the Perceived Stress Scale (PSS).

This scale is based on people’s answers to 10 questions that assess whether they feel that their life is overloaded, unpredictable and uncontrollable. Perceived stress, as measured through the PSS, can be viewed as an assessment of the risk that people face for psychological disorders related to stress, such as anxiety and depression, as well as physical illnesses, such as cardiovascular disease and susceptibility to infectious diseases. ….

...The survey analysis produced two major findings that illustrate the complex interplay of digital technology and stress:
Overall, frequent internet and social media users do not have higher levels of stress. In fact, for women, the opposite is true for at least some digital technologies. Holding other factors constant, women who use Twitter, email and cellphone picture sharing report lower levels of stress.

At the same time, the data show there are circumstances under which the social use of digital technology increases awareness of stressful events in the lives of others. Especially for women, this greater awareness is tied to higher levels of stress and it has been called “the cost of caring.” Stress is not associated with the frequency of people’s technology use, or even how many friends users have on social media platforms. But there is one way that people’s use of digital technology can be linked to stress: Those users who feel more stress are those whose use of digital tech is tied to higher levels of awareness of stressful events in others’ lives. This finding about “the cost of caring” adds to the evidence that stress is contagious.

How can it be that social media use is not directly associated with stress, but for some, social media use can still lead to higher levels of stress?
The answer: The relationship between stress and social media use is indirect. It is the social uses of digital technologies, and the way they increase awareness of distressing events in others’ lives, that explains how the use of social media can result in users feeling more stress.


This is a worthwhile 12-minute video that provides some concise insight on persuasion and how marketers are using it on us.
Science Of Persuasion
This animated video describes the six universal Principles of Persuasion that have been scientifically proven to make you most effective based on the research in Dr. Cialdini’s groundbreaking book, Influence. This video is narrated by Dr. Robert Cialdini and Steve Martin, CMCT.

Dr. Robert Cialdini, Professor Emeritus of Psychology and Marketing at Arizona State University, has spent his entire career researching the science of influence, earning him a worldwide reputation as an expert in the fields of persuasion, compliance, and negotiation.


Here’s an upgrade coming soon to an Android device near you - if it isn’t here already. Using your camera to ‘see’ signs in another language and translate them.
Google Translate gets smarter with language detection, Word Lens
Star Trek's universal translator is here, and it's on your phone. Google is updating its Translate app on Wednesday, and, as rumored, the new version includes automatic language detection in conversation mode, so having a conversation between two people who don't speak the same language is actually possible.

Once you've selected the two languages being spoken, Google Translate can now tell which one is being spoken at any moment. With no need to manually toggle them, conversations can be more natural.

The update to Google Translate also integrates Word Lens, which instantaneously translates written text. Previous versions required the user to take a picture of text and mark which words they wanted translated; Word Lens means you only need to hold the phone up so the text is visible onscreen, and the app will translate the words before your eyes.


More on the frontiers of biotechnology and the domestication of DNA.
FIRST CONTRACTING HUMAN MUSCLE GROWN IN LABORATORY
Researchers at Duke University report the first lab-grown, contracting human muscle, which could revolutionize drug discovery and personalized medicine.
In a laboratory first, Duke researchers have grown human skeletal muscle that contracts and responds just like native tissue to external stimuli such as electrical pulses, biochemical signals and pharmaceuticals.

The lab-grown tissue should soon allow researchers to test new drugs and study diseases in functioning human muscle outside of the human body.
The study was led by Nenad Bursac, associate professor of biomedical engineering at Duke University, and Lauran Madden, a postdoctoral researcher in Bursac’s laboratory. It appears January 13 in the open-access journal eLife.

“The beauty of this work is that it can serve as a test bed for clinical trials in a dish,” said Bursac. “We are working to test drugs’ efficacy and safety without jeopardizing a patient’s health and also to reproduce the functional and biochemical signals of diseases—especially rare ones and those that make taking muscle biopsies difficult.”


Last week there was an article about how, in 2015, there would be human trials of nanobots to perform certain types of cancer treatments.
Here’s something about the use of domesticated DNA to do a lot more.
DNA Origami Could Lead to Nano “Transformers” for Biomedical Applications
Tiny hinges and pistons hint at possible complexity of future nano-robots
The project is the first to prove that the same basic design principles that apply to typical full-size machine parts can also be applied to DNA—and can produce complex, controllable components for future nano-robots.

In a paper published this week in the Proceedings of the National Academy of Sciences, Ohio State mechanical engineers describe how they used a combination of natural and synthetic DNA in a process called “DNA origami” to build machines that can perform tasks repeatedly.

“Nature has produced incredibly complex molecular machines at the nanoscale, and a major goal of bio-nanotechnology is to reproduce their function synthetically,” said project leader Carlos Castro, assistant professor of mechanical and aerospace engineering. “Where most research groups approach this problem from a biomimetic standpoint—mimicking the structure of a biological system—we decided to tap into the well-established field of macroscopic machine design for inspiration.”

“In essence, we are using a bio-molecular system to mimic large-scale engineering systems to achieve the same goal of developing molecular machines,” he said.
Ultimately, the technology could create complex nano-robots to deliver medicine inside the body or perform nanoscale biological measurements, among many other applications. Like the fictional “Transformers,” a DNA origami machine could change shape for different tasks.


If anyone is interested in the old business model of how ‘hits’ are produced - how the music business is less concerned with artists’ integrity and more concerned with the mass manufacturing of incantations of consumption - this is a must read.
Why Do All Records Sound the Same?
Desperate to get their music on the radio at all costs, record labels are employing powerful software to artificially sweeten it, polish it, make it louder— squeezing out the last drops of its individuality
There was once a little-watched video on Maroon 5's YouTube channel which documents the tortuous, tedious process of crafting an instantly-forgettable mainstream radio hit.

It’s fourteen minutes of elegantly dishevelled chaps sitting on leather sofas, playing $15,000 vintage guitars next to $200,000 studio consoles, staring at notepads and endlessly discussing how little they like the track (called “Makes Me Wonder”) and how it doesn’t have a chorus. Even edited down, the tedium is mind-boggling as they play the same lame riff over and over and over again. At one point, singer Adam Levine says: “I’m sick of trying to engineer songs to be hits.” But that’s exactly what he proceeds to do.

Playlists of Hot Adult Contemporary stations are determined by a computer, most likely running the Google-owned Scott SS32 radio automation suite, which shuffles the playlist of 400 to 500 tracks, inserts ads and idents and tells the DJ when to talk. The playlist is compiled after extensive research. Two or three times a year, a company like L.A.-based Music Research Consultants Inc arrives in town, hires a hotel ballroom or lecture theatre and recruits 50 to 100 people, carefully screened for demographic relevance (they might all be white suburban housewives aged 26–40). They’re each given $65 and a perception analyzer—a little black box with one red knob and an LED display. Then, they’re played 700 seven-second clips of songs. If they turn the knob up, the song gets played. If they turn it down, it doesn’t.

Why does most music sound the same these days? Because record companies are scared, they don’t want to take risks, and they’re doing the best they can to generate mainstream radio hits. That is their job, after all. And as the skies continue to darken over the poor benighted business of selling music, labels are going to cling to what they know more fiercely than ever.


Here is another dimension we should be concerned with regarding the misuse of incumbent power - whether it be issues like libel, or copyright chill or secret surveillance. The full report is downloadable.
Global Chilling: The Impact of Mass Surveillance on International Writers
A new report demonstrates the damaging impact of surveillance by the United States and other governments on free expression and creative freedom around the world. The report’s revelations, based on a survey of nearly 800 writers worldwide, are alarming.

Concern about surveillance is now nearly as high among writers living in democracies (75%) as among those living in non-democracies (80%). The levels of self-censorship reported by writers living in democratic countries are approaching the levels reported by writers living in authoritarian or semi-democratic countries. And writers around the world think that mass surveillance has significantly damaged U.S. credibility as a global champion of free expression for the long term. On the basis of the survey findings, PEN urges the newly seated U.S. Congress to put reform of mass surveillance programs that violate constitutional and international human rights at the top of its to-do list.


With the dramatic change in oil prices - this looks like it’s really good news.
Investment in renewable energy soars for first time in three years
Solar power is expected to reach price parity with coal-generated electricity by 2016
Demand for solar power grew 16% year-over-year in 2014, representing 44 billion watts (44 gigawatts) of capacity purchased during the year. Worldwide solar demand in 2015 is projected to be 51.4GW, compared with 39GW in 2014, according to a recent report by Bloomberg New Energy Finance.

China, the United States and Japan will represent 57% of the overall uptick in solar demand in 2015.
Part of that growth is because the cost to deploy renewable energy has plummeted over the past few decades, and in many regions has achieved price parity with traditional energy generation.

Today, 10 U.S. states boast solar energy costs that are on par with those of conventional electricity generation methods, such as coal-fired power plants. Those states are Arizona, California, Connecticut, Hawaii, Nevada, New Hampshire, New Jersey, New York, New Mexico and Vermont.

The average cost of solar panels has gone from $76.67 per watt in 1977 to just 61 cents per watt today, according to PVinsights.


In light of the above, this looks like an interesting confirmation.
Germany Reaches New Levels of Greendom, Gets 31 Percent of Its Electricity From Renewables
As Europe struggles to ease its dependency on Russian gas, Germany is getting ever greener: During the first half of 2014, the nation generated 31 percent of its electricity from renewable energy sources, according to a recent report by the Fraunhofer Institute (PDF). Excluding hydro, renewables accounted for 27 percent of electricity production, up from 24 percent last year.

“Solar and wind alone made up a whopping 17 percent of power generation, up from around 12 percent to 13 percent in the past few years,” according to Renewables International, which provides a helpful rundown of the Fraunhofer report. The country’s solar power plants increased total production by 28 percent compared with the first half of 2013, while wind power grew about 19 percent.

Germany still derives most of its energy from coal, though consumption of brown coal dropped 4 percent. Power from natural gas fell 25 percent, while nuclear power decreased by only about 2 percent.


If there is a shift in ‘energy regime’ in the next decade, that also means a huge shift in geopolitics - something that this long-known fact will give an extra shove to.
Half global wealth held by the 1%
Oxfam warns of widening inequality gap, days ahead of Davos economic summit in Switzerland
Billionaires and politicians gathering in Switzerland this week will come under pressure to tackle rising inequality after a study found that – on current trends – by next year, 1% of the world’s population will own more wealth than the other 99%.

Ahead of this week’s annual meeting of the World Economic Forum in the ski resort of Davos, the anti-poverty charity Oxfam said it would use its high-profile role at the gathering to demand urgent action to narrow the gap between rich and poor.

The charity’s research, published today, shows that the share of the world’s wealth owned by the best-off 1% has increased from 44% in 2009 to 48% in 2014, while the least well-off 80% currently own just 5.5%.

Oxfam added that on current trends the richest 1% would own more than 50% of the world’s wealth by 2016.
Winnie Byanyima, executive director of Oxfam International and one of the six co-chairs at this year’s WEF, said the increased concentration of wealth seen since the deep recession of 2008-09 was dangerous and needed to be reversed.


And finally - here is some evidence of As Above So Below.
Astrophysicists Prove That Cities On Earth Grow in the Same Way As Galaxies in Space

The way galaxies evolve from variations in matter density in the early universe is mathematically equivalent to the way cities grow from changes in population density on Earth, say cosmologists.
Urban sociologists have long known that a set of remarkable laws govern the large-scale interaction between individuals such as the probability that one person will befriend another and the size of the cities they live in.

The latter is an example of Zipf’s law. If cities are listed according to size, then the rank of a city is inversely proportional to the number of people who live in it. For example, if the biggest city in the US has a population of 8 million people, the second-biggest city will have a population of 8 million divided by 2, the third biggest will have a population of 8 million divided by 3 and so on.

This simple relationship is known as a scaling law and turns out to fit the observed distribution of city sizes extremely well.
Another interesting example is the probability that one person will be friends with another. This turns out to be inversely proportional to the number of people who live closer to the first person than the second.
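The rank-size relationship described above is simple enough to sketch in a few lines of Python. This is a minimal illustration using the article’s 8-million example; the function name is ours, not from the research:

```python
# Minimal sketch of Zipf's rank-size law for cities, using the
# article's example: the biggest city has 8 million people.

def zipf_population(largest: float, rank: int) -> float:
    """Expected population of the city at a given rank under Zipf's law."""
    return largest / rank

largest = 8_000_000
top_five = [zipf_population(largest, r) for r in range(1, 6)]
# Populations fall off as 1/rank: 8M, 4M, ~2.67M, 2M, 1.6M

# Rank times population stays (approximately) constant --
# the signature of the scaling law:
assert all(abs(r * p - largest) < 1e-6
           for r, p in enumerate(top_five, start=1))
```

Real city-size data only approximately follows this curve, of course; the point of the law is how well such a one-parameter relationship fits observed distributions.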

What’s curious about these laws is that although they are widely accepted, nobody knows why they are true. There is no deeper theoretical model from which these laws emerge. Instead, they come simply from the measured properties of cities and friendships.
Today, all that changes thanks to the work of Henry Lin and Abraham Loeb at the Harvard-Smithsonian Centre for Astrophysics in Cambridge. These guys have discovered a single unifying principle that explains the origin of these laws.