Thursday, November 17, 2016

Friday Thinking 18 Nov. 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase transition - and that tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.


“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


In many ways, there has never been a better time to be alive. Violence plagues some corners of the world, and too many still live under the grip of tyrannical regimes. And although all the world’s major faiths teach love, compassion and tolerance, unthinkable violence is being perpetrated in the name of religion.

And yet, fewer among us are poor, fewer are hungry, fewer children are dying, and more men and women can read than ever before. In many countries, recognition of women’s and minority rights is now the norm. There is still much work to do, of course, but there is hope and there is progress.

How strange, then, to see such anger and great discontent in some of the world’s richest nations. In the United States, Britain and across the European Continent, people are convulsed with political frustration and anxiety about the future.

Why?
A small hint comes from interesting research about how people thrive. In one shocking experiment, researchers found that senior citizens who didn’t feel useful to others were nearly three times as likely to die prematurely as those who did feel useful. This speaks to a broader human truth: We all need to be needed.

Being “needed” does not entail selfish pride or unhealthy attachment to the worldly esteem of others. Rather, it consists of a natural human hunger to serve our fellow men and women. As the 13th-century Buddhist sages taught, “If one lights a fire for others, it will also brighten one’s own way.”

Dalai Lama: Behind Our Anxiety, the Fear of Being Unneeded




A FULLY CONNECTED WORLD NO LONGER NEEDS A MIDDLE CLASS.
This scenario was easy enough to predict back in the late 1980s. What’s been more difficult to handle has been watching the middle class disintegrate in real time. But since few are in denial about the middle class’s impending doom, the magic thing about the present moment is that everyone everywhere is, with great anxiety, trying to figure out what comes next. What comes next I would call the “blank-collar class.” It’s not Fordist blue-collar. It’s not “Hi! It’s 1978 and I’m a travel agent!” white-collar. Blank collar means this – and listen carefully because this is the rest of your life – if you don’t possess an actual skill (surgery, baking, plumbing) then prepare to cobble together a financial living doing a mishmash of random semi-skilled things: massaging, lawn-mowing, and babysitting the children of people who actually do possess skills, or who own the means of production, or the copper mine, or who are beautiful and charismatic. And here’s the clincher: The only thing that is going to make any of this tolerable is that you have uninterrupted high-quality access to smoking hot Wi-Fi. Almost any state of being is okay in the twenty-first century as long as you can remain connected to this thing that turns you into something that is more than merely human.

DOUGLAS COUPLAND - BOHEMIA = UTOPIA?



“We were raised to believe that democracy, and even the democracy that we have, is a system that has somehow inherent good to it,” he added. But it’s not just democracy that fails. “Hierarchical organizations are failing in the response to decision-making challenges. And this is true whether we’re talking about dictatorships, or communism that had very centralized control processes, and for representative democracies today. Representative democracies still focus power in one or few individuals. And that concentration of control and decision-making makes those systems ineffective.”

Society Is Too Complicated to Have a President, Complex Mathematics Suggest




This is a perfect conversation - an informed, science-literate politician, a scientist and a journalist discussing the future of AI.
Obama - what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?
The way I’ve been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a relatively light touch, investing heavily in research and making sure there’s a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the govern­ment needs to be involved a little bit more. Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad base set of values. Otherwise, we may find that it’s disadvantaging certain people or certain groups.
The analogy that we still use when it comes to a great technology achievement, even 50 years later, is a moon shot. And somebody reminded me that the space program was half a percent of GDP. That doesn’t sound like a lot, but in today’s dollars that would be $80 billion that we would be spending annually … on AI. Right now we’re spending probably less than a billion.

OBAMA: You’re exactly right, and that’s what I mean by redesigning the social compact. Now, whether a universal income is the right model—is it gonna be accepted by a broad base of people?—that’s a debate that we’ll be having over the next 10 or 20 years. You’re also right that the jobs that are going to be displaced by AI are not just low-skill service jobs; they might be high-skill jobs but ones that are repeatable and that computers can do. What is indisputable, though, is that as AI gets further incorporated, and the society potentially gets wealthier, the link between production and distribution, how much you work and how much you make, gets further and further attenuated—the computers are doing a lot of the work. As a consequence, we have to make some tougher decisions. We underpay teachers, despite the fact that it’s a really hard job and a really hard thing for a computer to do well. So for us to reexamine what we value, what we are collectively willing to pay for—whether it’s teachers, nurses, caregivers, moms or dads who stay at home, artists, all the things that are incredibly valuable to us right now but don’t rank high on the pay totem pole—that’s a conversation we need to begin to have.

Barack Obama Talks AI, Neural Nets, Self-Driving Cars, and the Future of the World

OBAMA: My general observation is that it has been seeping into our lives in all sorts of ways, and we just don’t notice; and part of the reason is because the way we think about AI is colored by popular culture. There’s a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right? Computers start getting smarter than we are and eventually conclude that we’re not all that useful, and then either they’re drugging us to keep us fat and happy or we’re in the Matrix. My impression, based on talking to my top science advisers, is that we’re still a reasonably long way away from that. It’s worth thinking about because it stretches our imaginations and gets us thinking about the issues of choice and free will that actually do have some significant applications for specialized AI, which is about using algorithms and computers to figure out increasingly complex tasks. We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

But they underestimate the difficulties, and I feel like this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important. In the Media Lab we use the term extended intelligence. Because the question is, how do we build societal values into AI?



Conversations around the future of work increasingly include serious consideration of a universal livable income - even among many so-called mega-capitalists.
"There’s a pretty good chance we end up with a universal basic income, or something like that, due to automation," said Musk. "I'm not sure what else one would do. That’s what I think would happen."

Elon Musk thinks universal income is answer to automation taking human jobs

Tech innovators in the self-driving car and AI industries talk a lot about how many human jobs will be innovated out of existence, but they rarely explain what will happen to all those newly jobless humans. As usual, Tesla and SpaceX founder Elon Musk responds to an obvious question with an answer that may surprise some.

In an interview with CNBC on Friday, Musk said that he believes the solution to taking care of human workers who are displaced by robots and software is creating a (presumably government-backed) universal basic income for all.


Here’s an article from Knowledge@Wharton indicating changing economic conditions - especially for younger workers.
“Working for a big, stable company would have typically been seen as a fantastic career decision — there’s opportunity for advancement and good wages,” Cobb says. “That no longer seems to be the case. The advantages of working in a large firm have really declined in some meaningful ways.”

Why It No Longer Pays to Work for a Larger Firm

A job at a large company used to bring with it several advantages, not the least of which was generally higher pay than similar employees working at a smaller firm. Called the firm-size wage effect, the phenomenon has been extensively studied by economists and sociologists as it has eroded in the last three decades and affected everything from employer-employee relationships to income inequality.

But what was less known is what segment of workers suffered the most under the erosion of the wage effect, and how much that erosion exacerbated the growing income inequality in the U.S.

New research co-authored by Wharton management professor Adam Cobb has provided the answer. In his paper, “Growing Apart: The Changing Firm-Size Wage Effect and Its Inequality Consequences,” Cobb and co-author Ken-Hou Lin of the University of Texas at Austin found that workers in the middle and bottom of the wage scales felt the biggest negative effects from the degradation of the link between firm size and wages. Those at the top of the wage scale, however, experienced no loss in the large-firm wage premium.

Moreover, the uneven erosion of the firm-size wage effect explains around 20% of rising wage inequality during the study period of 1989 to 2014 — a testament, Cobb says, to the impact large firms have on rising inequality.


The challenges of technology and the digital environment have to include the influence of ideology in creating frames that either polarize us or help us find common ground. The last few decades seem to have been dominated by frames that polarize rather than enable common ground - this may not be true everywhere, but America is certainly a dramatic example. This article, an excellent study with clear and powerful visuals, is a must-read for anyone interested in change within political economies.

Political Polarization in the American Public

How Increasing Ideological Uniformity and Partisan Antipathy Affect Politics, Compromise and Everyday Life
Republicans and Democrats are more divided along ideological lines – and partisan antipathy is deeper and more extensive – than at any point in the last two decades. These trends manifest themselves in myriad ways, both in politics and in everyday life. And a new survey of 10,000 adults nationwide finds that these divisions are greatest among those who are the most engaged and active in the political process.


This is a nice summary of work by Yaneer Bar-Yam on society as a complex system - one that requires new forms of governance. A must-read for anyone interested in knowledge management and organizational governance within a context of increasing complexity.
It is absurd, then, to believe that the concentration of power in one or a few individuals at the top of a hierarchical representative democracy will be able to make optimal decisions on a vast array of connected and complex issues that will certainly have sweeping and unintended ramifications on other parts of human civilization.

“There’s a natural process of increasing complexity in the world,” Bar-Yam told me. “And we can recognize that at some point, that increase in complexity is going to run into the complexity of the individual. And at that point, hierarchical organizations will fail.”

Society Is Too Complicated to Have a President, Complex Mathematics Suggest

Human society is simply too complex for representative democracy to work. The United States probably shouldn’t have a president at all, according to an analysis by mathematicians at the New England Complex Systems Institute.

NECSI is a research organization that uses math cribbed from the study of physical and chemical systems—bear with me for a moment—and newly available giant data sets to explain how events in one part of the world might affect something seemingly unrelated in another part of the world.

Most famously, the institute’s director, Yaneer Bar-Yam, predicted the Arab Spring several weeks before it happened. He found that seemingly unrelated policy decisions—ethanol subsidies in the US and the deregulation of commodity markets worldwide—led to skyrocketing food prices in 2008 and 2011. It turns out that there is a very neat correlation between the United Nations food price index and unrest and rioting worldwide that no one but Bar-Yam had picked up.
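For readers who want to play with this idea themselves, here is a minimal sketch of how one might check such a correlation. The file and column names are hypothetical stand-ins, not Bar-Yam's actual data pipeline:

```python
import pandas as pd

# Hypothetical inputs: a monthly FAO food price index series and a
# monthly count of unrest/riot events, each with a 'date' column.
prices = pd.read_csv('fao_food_price_index.csv', parse_dates=['date'])
unrest = pd.read_csv('monthly_riot_counts.csv', parse_dates=['date'])

# Align the two series on date and compute a simple Pearson correlation.
merged = prices.merge(unrest, on='date')
print(merged['price_index'].corr(merged['riot_count']))
```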


The selfie is often considered a signal of increasing narcissism - yet the selfie only exists as an experience of sharing, an ‘I am because we are’ sort of re-identification of a social self. This is a worthwhile read for those interested in how the digital environment is contributing to an emergent condition of identity as a process of identification.
“Rather than thinking of a ‘digital camera,’ I’d suggest that one should think about the image sensor as an input method, just like the multi-touch screen,” Evans writes. “That points not just to new types of content but new interaction models.”

The central self-narrating truth of post-modernity is being made real through technology. To “just be yourself” is not only to stand unvarnished in harsh light, but to put on dog ears and perfect skin in one message, a flaming fireball face the next, sponsored troll hair the message after that, and so on.

Forget drones and spaceships: The Snapchat dogface filter is the future

Think of the training data that Snapchat has had: teens sending pictures of themselves to each other as messages. If people are talking in pictures, they need those pictures to be capable of expressing the whole range of human emotion. Sometimes you want clear skin and sometimes you want deer ears. Lenses let us capture ourselves as we would like to be transmitted, rather than what the camera actually sees.

Which is why the Snapchat camera dominates among teens: A Snapchattian camera understands itself as a device that captures and processes images for the purpose of transmission within a social network. And it’ll bring any technology to bear within the product to make that more fun and interesting and engaging.

Being a camera company in this modern age means helping a picture-taker shape the reality that the camera perceives. We don’t just snap pure photos anymore. The moments we capture do not require fidelity to what actually was. We point a camera at something and then edit the images to make sure that the thing captured matches our mood and perceptions. In fact, most images we take with our cameras are not at all what we would see with our eyes. For one, our eyes are incredible visual instruments that can do things no camera can. But also, we can crank the ISO on fancy cameras to shoot in the dark. Or we can use slo-mo. Or Hyperlapse. Or tweak the photos with the many filter-heavy photo apps that rushed in after Instagram. Hell, the fucking Hubble Space Telescope pictures of the births of stars are as much the result of processing as they are the raw data that’s captured.


In the next few years this capability may well radically change how we make films and videos. This is a must-see 7 min video.

Face2Face: Real-time Face Capture and Reenactment of RGB Videos

This demo video is purely research-focused and we would like to clarify the goals and intent of our work. Our aim is to demonstrate the capabilities of modern computer vision and graphics technology, and convey it in an approachable and fun way. We want to emphasize that computer-generated videos have been part of feature films for over 30 years. Virtually every high-end movie production contains a significant percentage of synthetically-generated content (from Lord of the Rings to Benjamin Button). These results are hard to distinguish from reality and it often goes unnoticed that the content is not real. The novelty and contribution of our work is that we can edit pre-recorded videos in real-time on a commodity PC. Please also note that our efforts include the detection of edits in video footage in order to verify a clip’s authenticity. For additional information, we refer to our project website (see above). Hopefully, you enjoyed watching our video, and we hope to provide a positive takeaway :)
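For the curious, the very first stage of any such pipeline is simply finding the face in each frame. Here is a minimal sketch using OpenCV's stock detector - Face2Face itself uses far more sophisticated dense face tracking, so this is only an illustration of the starting point:

```python
import cv2

# Locate faces in a single webcam frame using OpenCV's bundled Haar cascade.
cap = cv2.VideoCapture(0)  # a commodity webcam, as in the authors' setup
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"faces found in this frame: {len(faces)}")
cap.release()
```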


As scientists and laypersons of all stripes and beliefs, we too often forget this important point. Equally important is to realize that frame, metaphor and narrative are also ‘maps that are not the territory’ but ones that can structure entailed lines of reasoning in powerful and subtle ways. A metaphor enables cross-domain mapping of knowledge - enabling the novel, new and unfamiliar to be grasped with knowledge of older, familiar domains.
Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without us realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)
A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.
Thus, financial events deemed to be 5, or 6, or 7 standard deviations from the norm tend to happen with a certain regularity that nowhere near matches their supposed statistical probability.  Financial markets have no biological reality to tie them down.

The Map is Not the Territory

In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.

However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.
  • A.) A map may have a structure similar or dissimilar to the structure of the territory.
  • B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.
  • C.) A map is not the actual territory.
  • D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.

Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. Lewis Carroll made that clear by having Mein Herr describe a map with the scale of one mile to one mile. Such a map would not have the problems that maps have, nor would it be helpful in any way.
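The ‘standard deviations’ point above is easy to make concrete. Under a Gaussian map of markets, large moves should be vanishingly rare - here is a quick sketch, assuming roughly 252 trading days per year:

```python
from scipy.stats import norm

# One-sided tail probability of an N-sigma daily move under a Gaussian
# model, and how rarely such a move 'should' occur at ~252 trading days/year.
for sigma in (3, 5, 7):
    p = norm.sf(sigma)
    print(f"{sigma}-sigma move: p = {p:.2e}, "
          f"expected once every {1 / (p * 252):,.0f} trading years")
```

The gap between those astronomical waiting times and how often markets actually produce such moves is exactly the map-versus-territory gap the quote describes.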


Here is a 25 min video panel discussion about the future of sensation - well worth the view for anyone interested in how we may augment our senses. In particular, the cyborg artist presents himself as a definitely augmented being and has even had his passport picture accepted with an ‘antenna’ protruding from his head. Another artist has an implanted device that senses seismic events in the world - she feels any earthquake as it’s happening.

OUT OF YOUR MIND

Cyborg artist Neil Harbisson, neuroscientist Sheila Nirenberg and Meta's John Werner discuss advances in neuroscience and augmented reality that allow us to experience the world like never before.


The domestication of DNA has crossed another threshold.
"I think this is going to trigger ‘Sputnik 2.0’, a biomedical duel on progress between China and the United States, which is important since competition usually improves the end product,” he says.

CRISPR gene-editing tested in a person for the first time

The move by Chinese scientists could spark a biomedical duel between China and the United States.
A Chinese group has become the first to inject a person with cells that contain genes edited using the revolutionary CRISPR–Cas9 technique.

On 28 October, a team led by oncologist Lu You at Sichuan University in Chengdu delivered the modified cells into a patient with aggressive lung cancer as part of a clinical trial at the West China Hospital, also in Chengdu.

Earlier clinical trials using cells edited with a different technique have excited clinicians. The introduction of CRISPR, which is simpler and more efficient than other techniques, will probably accelerate the race to get gene-edited cells into the clinic across the world, says Carl June, who specializes in immunotherapy at the University of Pennsylvania in Philadelphia and led one of the earlier studies.


And here’s another looming future: extending our minds into an ever more diverse ecology of prosthetics.
The new research appears to be the first time wireless brain-control was established to restore walking in an animal. It is part of a campaign by scientists to develop systems that are “fully implantable and invisible” and which could restore volitional movement to paralyzed people, says Bouton.

Brain Control of Paralyzed Limb Lets Monkey Walk Again

A step toward repairing spinal cord injury with electronics.
In a step toward an electronic treatment for paralysis, Swiss scientists say two partly paralyzed monkeys have been able to walk under control of a brain implant.

The studies, carried out at the École Polytechnique Fédérale in Lausanne, Switzerland, successfully created a wireless bridge between the monkeys’ brains and hind limbs, permitting them to advance along a treadmill with a tentative gait.

The research, published today in the journal Nature, brings together several technologies: a brain implant that senses an animal’s intention to walk, electrodes attached to the lower spinal cord that can stimulate walking muscles, and a wireless connection between the two.


Here is a much more innocuous prosthetic - but how long before the camera is an implant?
“We want to empower end users to accomplish these activities of daily living through technology,” says Jon Froehlich at the University of Maryland.

Tiny fingertip camera helps blind people read without braille

No braille? No problem. A new device lets blind people read by popping a miniature camera on their fingertip.

To read printed material, many visually impaired people rely on mobile apps like KNFB Reader that translate text to speech. Snap a picture and the app reads the page aloud. But users sometimes find it difficult to ensure that their photo captures all of the text, and these apps can have trouble parsing a complex layout, such as a newspaper or restaurant menu.

Froehlich and his colleagues have developed a device, nicknamed HandSight, that uses a tiny camera originally developed for endoscopies. Measuring just one millimetre across, the camera sits on the tip of the finger while the rest of the device clasps onto the finger and wrist. As the user follows a line of text with their finger, a nearby computer reads it out. Audio cues or haptic buzzes help the user make their way through the text, for example changing pitch or gently vibrating to help nudge their finger into the correct position.
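Here is a hedged sketch of that read-aloud loop - not the actual HandSight code; pytesseract and pyttsx3 stand in for whatever OCR and text-to-speech stack the real device uses:

```python
import cv2
import pytesseract
import pyttsx3

cap = cv2.VideoCapture(0)   # stand-in for the 1 mm fingertip camera
engine = pyttsx3.init()     # offline text-to-speech

ok, frame = cap.read()
if ok:
    # OCR the captured frame and speak any recognized text aloud.
    text = pytesseract.image_to_string(frame).strip()
    if text:
        engine.say(text)    # the real device adds pitch/haptic guidance cues
        engine.runAndWait()
cap.release()
```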

In a study published in October, 19 blind people tried out the technology, spending a couple of hours exploring passages from a school textbook and a magazine-style page. On average, they were able to read between 63 and 81 words per minute and only missed a few words in each passage. The average reading speed for an expert braille reader is around 90 to 115 words per minute, while sighted individuals have an average reading speed around 200 words per minute.


This is an interesting development of another type of map: our dietary-culinary map - especially relevant to the 21st-century cosmopolitan experience, where we seek both pleasure and health. It is a weak signal of a looming possibility - big data comparing diets, culinary experience and health across the globe and the individual, soon to include genomic data.

Kissing Cuisines: Exploring Worldwide Culinary Habits on the Web

As food becomes an important part of modern life, recipes shared on the web are a great indicator of civilizations and culinary attitudes in different countries. Similarly, ingredients, flavors, and nutrition information are strong signals of the taste preferences of individuals from various parts of the world. Yet, we do not have a thorough understanding of these palate varieties.

In this paper, we present a large-scale study of recipes published on the Web and their content, aiming to understand cuisines and culinary habits around the world. Using a database of more than 157K recipes from over 200 different cuisines, we analyze ingredients, flavors, and nutritional values which distinguish dishes from different regions, and use this knowledge to assess the predictability of recipes from different cuisines. We then use country health statistics to understand the relation between these factors and health indicators of different nations, such as obesity, diabetes, migration, and health expenditure. Our results confirm the strong effects of geographical and cultural similarities on recipes, health indicators, and culinary preferences between countries around the world.
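As a toy illustration of ‘assessing the predictability of recipes from different cuisines’ - not the paper's actual method - one can treat ingredient lists as text and train a simple classifier:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the paper's 157K-recipe database.
recipes = ["soy sauce ginger rice scallion",
           "tomato basil olive oil pasta",
           "cumin coriander turmeric lentils",
           "soy sauce sesame noodles ginger"]
cuisines = ["chinese", "italian", "indian", "chinese"]

# Bag-of-ingredients features feeding a logistic regression classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(recipes, cuisines)
print(model.predict(["ginger scallion rice"]))  # likely ['chinese'] here
```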


This is a 4 min video that presents how good games enable players to engage in productive struggle.

What Do Games Have to Do with Productive Struggle?

How can productive struggle foster the learning process in students' classroom experiences?
Education researcher and interactive game developer Ki Karou shares a selection of game-based learning strategies that can develop students' capacity for productive struggle. The ultimate goal is to develop our students into curious, tenacious and creative problem solvers.

And another short article about using games to enhance learning.
“It’s not being made to replace the teacher, it’s not being made to replace instruction,” said Geoffrey Suthers, a game designer who was part of the team that developed Project Sampson, which combines math concepts with real-world problem solving as players work to manage the impact of a natural disaster. “It’s a way to have further exposure, and also to give you an idea of how these math concepts are applied.”
“There’s this false dichotomy between fun and work,” she said. “Actually, when you play games, you work hard. Because you’re having fun, you’re willing to do more work. And that’s the point.”

Community College Pilot Project Finds Game-based Learning a Winner in Remedial Math

Players of the video game xPonum have a clear mission: set the trajectory of the laser to hit the row of gems. Collect the gems to clear levels and earn badges—and learn some algebra and trigonometry along the way.

This summer, nine students at the Borough of Manhattan Community College (BMCC) in New York City took part in a pilot program to develop a game-based remedial math curriculum. The course is designed to build algebra skills for science, technology, engineering and math majors, and to prepare students to take a pre-calculus course in the fall.

One of three games developed for this course, xPonum focuses on algebra and trigonometry. The other two—Project Sampson, which combines mapping and geography skills with math content, and Algebots, which turns algebra into a puzzle game—were also played by students throughout the summer course.

Another benefit of game-based learning is that it forces students out of a “procedural mindset,” Offenholley said. In her experience, students like having a set of procedures: memorize the content, pass the test, finish the course. But she thinks more students would be successful in math class—and maybe even enjoy it more—if they not only memorized what to do to solve an equation, but also understood why those steps worked. Games motivate students to dig deeper into the content and to experiment with a subject like math, which can be very dry, in a fun, playful way.


Talking about learning from games - we’ve seen how AI has overcome human masters in chess and Go; the next frontier is complex real-time strategy games like StarCraft.
One of the most interesting parts of StarCraft’s “messiness” is its use of incomplete information as a gameplay parameter. Much of each player’s vision is obscured by a ‘fog of war,’ forcing players to predict one another’s decisions while planning their own. That’s a challenge not faced by developers of artificial intelligence for Chess, for instance, where the whole board is visible at once.

Google and Blizzard Will Help Researchers Use Starcraft to Train Artificial Intelligence

Insights from gameplay could help with real-world AI applications.
At this week’s BlizzCon convention in California, game developer Blizzard announced that it would release tools to allow third parties to teach artificial intelligences to play the real-time wargame Starcraft II. The tools are being developed in collaboration with Google’s DeepMind team, and will use the DeepMind platform.

In a blog post accompanying the announcement, the DeepMind team said Starcraft “is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real world.” The game involves interconnected layers of decisions, as players use resources to build infrastructure and assets before engaging in direct combat.

StarCraft’s complexity when compared to Chess or Go, then, makes it closer to the real-world problems faced by computers which do things like plan logistics networks. Those complex systems still present serious challenges for even the most powerful computers, and insights gleaned from StarCraft could help make their solutions faster and more efficient.
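The ‘fog of war’ idea is simple to sketch: the agent acts on a masked view of the true state, never the state itself. A minimal illustration - the names here are mine, not part of the actual StarCraft II tooling:

```python
import numpy as np

world = np.random.randint(0, 3, size=(8, 8))  # the full, hidden game state
visible = np.zeros_like(world, dtype=bool)    # fog mask: nothing seen yet

def reveal(pos, radius=1):
    """Lift the fog in a small square around one of the agent's units."""
    r, c = pos
    visible[max(r - radius, 0):r + radius + 1,
            max(c - radius, 0):c + radius + 1] = True

reveal((2, 2))
observation = np.where(visible, world, -1)    # -1 marks fogged tiles
print(observation)  # the agent must plan from this partial view
```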


Puzzles, AI and robotics - the progress continues to accelerate, and not just in chess or Go.

Robot 'sets new Rubik's Cube record'

A robot has just set a new record for the fastest-solved Rubik's Cube, according to its makers.
The Sub1 Reloaded robot took just 0.637 seconds to analyse the toy and make 21 moves, so that each of the cube's sides showed a single colour.
That beats a previous record of 0.887 seconds, which was achieved by an earlier version of the same machine using a different processor.
Infineon provided its chip to highlight advancements in self-driving car tech.

At the press of a button, shutters covering the robot's camera sensors were lifted, allowing it to detect how the cube had been scrambled.
It then deduced a solution and transmitted commands to six motor-controlled arms. These held the central square of each of the cube's six faces in place and spun them to solve the puzzle.
All of this was achieved in a fraction of a second, and it was only afterwards that the number of moves could be counted by checking a software readout.
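The software half of such a machine is well understood. Herbert Kociemba's two-phase algorithm, available as a Python package (pip install kociemba), routinely finds solutions of around 20 moves - I can't say it is what Sub1 Reloaded runs, but it shows the shape of the problem:

```python
import kociemba

# A 54-character facelet string describing a scrambled cube
# (this one is the example from the library's documentation).
scramble = 'DRLUUBFBRBLURRLRUBLRDDFDLFUFUFFDBRDUBRUFLLFDDBFLUBLRBD'

solution = kociemba.solve(scramble)
print(solution)                          # a sequence of face turns
print(len(solution.split()), "moves")    # typically ~20
```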


This short article explores the idea of empathy as both learnable and a choice we are able to make - important when we consider the current perception of social divisiveness in our political landscapes, and the consequences of training our warriors to be more ‘effective’ in conflict.
Even those suffering from so-called empathy deficit disorders like psychopathy and narcissism appear to be capable of empathy when they want to feel it.

Empathy Is Actually a Choice

Not only does empathy seem to fail when it is needed most, but it also appears to play favorites. Recent studies have shown that our empathy is dampened or constrained when it comes to people of different races, nationalities or creeds. These results suggest that empathy is a limited resource, like a fossil fuel, which we cannot extend indefinitely or to everyone.

While we concede that the exercise of empathy is, in practice, often far too limited in scope, we dispute the idea that this shortcoming is inherent, a permanent flaw in the emotion itself. Inspired by a competing body of recent research, we believe that empathy is a choice that we make whether to extend ourselves to others. The “limits” to our empathy are merely apparent, and can change, sometimes drastically, depending on what we want to feel.


Here is one signal of the future of ubiquitous sensors and 3D printing. There is a short video that is worth the view.
“One of the interesting things about the design is that much of the complexity is in the design of the body which is 3D printed,” researcher Mark Yim told Digital Trends. “Since the cost of 3D-printed parts are based on the volume of plastic in the part, and independent of complexity, the flyer is very low-cost.”

This Quarter-Sized, Self-Powered Drone Is the Smallest in the World

The Piccolissimo comes in two sizes, a quarter-sized one weighing less than 2.5 grams and a larger, steerable one that's heavier by 2 grams and wider by a centimeter.
According to the researchers, 100 or 1,000 small controllable flyers like the Piccolissimo could cover more of a disaster site than a single large drone and more cheaply.

Piccolissimo is made possible by UPenn’s ModLab, which specializes in “underactuated” robots that can achieve great ranges of motion with the fewest motors possible. The tiny tech only has two parts: the propeller and the body. The motor spins the body 40 times per second in one direction, while the propeller spins 800 times per second the opposite way. The drone can be steered because the propeller is mounted slightly off-center. Changing the propeller speed at precise points during the drone’s flight changes its direction.
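The steering trick is worth a sketch: if thrust is pulsed at a fixed phase of the body's 40 Hz spin, the sideways force from the off-center propeller adds up in one direction instead of averaging to zero. The numbers below are illustrative only, not UPenn's control code:

```python
import numpy as np

body_hz = 40.0                           # body spin rate from the article
t = np.linspace(0, 1.0, 100_000)         # one second of flight
phase = 2 * np.pi * body_hz * t          # body rotation angle over time

# Add 20% extra thrust only when the off-center propeller points
# toward +x; the rest of the time thrust is uniform.
boost = 1.0 + 0.2 * (np.cos(phase) > 0.9)
fx = np.mean(boost * np.cos(phase))      # net lateral force, x component
fy = np.mean(boost * np.sin(phase))      # net lateral force, y component
print(f"net drift direction: ({fx:+.4f}, {fy:+.4f})")  # biased toward +x
```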


The future of death is an interesting space to explore - the hippies were the vanguard of reclaiming the birthing experience for women and families from the institutional conveniences of male-centric operating-room control. Now the baby-boomers are nearing the next threshold of social rites of passage - how will we be able to ‘own’ our own and our families’ passage from life? This is a worthwhile read toward new concepts of ancient burial rites. There is a 27 min video included in the article.
“The choice to have a green burial reflects a deep understanding of our place in the larger ecosystem and the cosmos,” she tells The Creators Project. Astrophysicists, in particular, have a keen grasp of this concept. Neil deGrasse Tyson has said he wants his body to be “buried not cremated, so that the energy contained gets returned to the earth, so the flora and fauna can dine upon it, just as I have dined.”

Flesh-Eating Mushrooms Are the Future of Burial Tech

This NYFW, Ace Hotel New York will showcase garments no one would wear in this lifetime. Natural Causes is an Infinity Burial Suit exhibition co-curated by Coeio, a "green burial" company that created the suit as a radical alternative to traditional funerary practices. The burial suit spawned from the notion that mushrooms could be used to naturally decompose and cleanse toxins from a dead body, an idea manifested by a spore-laden jumpsuit meant to harmoniously eliminate pollutants while nourishing plants in the burial area. The Ace Hotel gallery show kicks off September 8 and runs through the end of the month.

Coeio, founded by MIT-trained artist and inventor Jae Rhim Lee, began as an artistic provocation meant to challenge cultural attitudes towards death, but quickly grew into a product, co-created with fashion designer Daniel Silverstein, with the potential to revolutionize the funeral industry. Typical American burials involve pumping a body full of synthetic fillers and formaldehyde, a carcinogen, to preserve a corpse and make it look alive. Bodies are entombed in caskets varnished with toxic chemicals, and the EPA rates casket manufacturers as one of the worst hazardous waste generators. Cremation is hardly cleaner; every year, 5,000 lbs of mercury are released into the atmosphere from dental fillings.


This may be good news for scientists who feel that their texting is too constrained.

I can haz more science emoji? Host of nerd icons proposed

At a conference in San Francisco, a group drafted proposals to add more planets, instruments and other science icons to the keyboard.
Science lovers, rejoice! More emoji designed for the nerd in all of us are on their way. This weekend, at the first-ever Emojicon in San Francisco, California, a group of science enthusiasts and designers worked on proposals for several new science-themed emoji. If these are approved, in a year or two, people could be expressing themselves with a heart–eye emoji wearing safety goggles.

On 6 November, the science emoji group submitted a formal proposal to the Unicode Consortium, the organization that oversees the official list of these icons, to include emoji for the other planets — aside from Earth — and Pluto. A second proposal, which the team plans to submit in the coming weeks, includes lab equipment (a beaker, Bunsen burner, fire extinguisher, Petri dish and goggles), a DNA double helix, a microbe, a chemical-element tile for helium, a mole (to represent the unit of measure and the animal) and a water molecule.

These would join existing official science-related emoji, such as a microscope, a telescope and a magnifying glass. There’s also an alembic — a piece of equipment used to distil chemicals — along with common plants and charismatic megafauna such as pandas. Unofficial sets of science-themed emoji include caricatures of Albert Einstein.
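For the record, the existing science emoji the piece mentions already have Unicode codepoints - any new ones would be added the same way, with the Consortium assigning codepoints and fonts catching up:

```python
# Print a few existing science emoji by their Unicode codepoints.
for name, cp in [('microscope', 0x1F52C),
                 ('telescope', 0x1F52D),
                 ('magnifying glass', 0x1F50D),
                 ('alembic', 0x2697)]:
    print(f"U+{cp:04X} {chr(cp)}  {name}")
```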