In the 21st Century curiosity will SKILL the cat.
Major Aspects of the Tactical Battlefield of 2050
The discussion of what major changes we could expect with respect to our ability to see, communicate, think, and decide on the tactical battlefield of 2050 was predicated upon a shared view that this battlefield would be characterized by a vastly increased presence of, and reliance on, automated processes and decision making; humans with augmented sensing; and information-related and cognitive capabilities. This breakout group posited that transport (getting capability to the battlefield) would not be a limiting consideration. The group identified and discussed the following six interrelated future capabilities that they felt would differentiate the battlefield of the future from current capabilities and engagements:
• Augmented humans
• Automated decision making and autonomous processes
• Misinformation as a weapon
• Large-scale self-organization and collective decision making
• Cognitive modeling of the opponent
• Ability to understand and cope in a contested, imperfect, information environment
For each of these developments, the group offered their reasons why they felt that these potentially transformative capabilities would be found on the tactical battlefield of 2050. They discussed the ways that adversaries could counter or mitigate the effectiveness of these capabilities, as well as how to counter those counters.
Visualizing the Tactical Ground Battlefield in the Year 2050: Workshop Report
There’s this hopelessly geeky new technology. It’s too hard to understand and use. How could it ever break the mass market? Yet developers are excited, venture capital is pouring in, and industry players are taking note. Something big might be happening.
That is exactly how the Web looked back in 1994 — right before it exploded. Two decades later, it’s beginning to feel like we might be at a similar liminal moment. Our new contender for the Next Big Thing is the blockchain — the baffling yet alluring innovation that underlies the Bitcoin digital currency.
There’s a blockchain for that!
In the past eight years, the share of non-conventional renewable energy (wind, solar and biomass) in Chile’s energy mix has risen from 1% to 11% and continues to grow at the rate of about 600MW per year. The government, which offers long-term contracts to private sector bidders, said it was confident of achieving its target of 20% by 2025. Costs are falling rapidly.
“Renewables are now as competitive as conventional energy, and we don’t have subsidies. Instead, we have a natural subsidy of great photovoltaic potential,” said Paula Estevez, the international affairs chief at the Ministry of Energy, who said one-third of energy projects under construction were solar.
Desert tower raises Chile's solar power ambition to new heights
In the classic economic paper The Nature of the Firm, Nobel economist Ronald Coase wrote about why firms exist. Coase's argument was that firms lower the costs of producing goods and services, because it's easier to coordinate people and projects when everything is done under one roof. But today, communications technology has dramatically reduced the costs of having certain jobs done out-of-house. Will firms live as long as they once did?
It seems not: Richard Foster, a lecturer at the Yale School of Management, has found that the average lifespan of an S&P company dropped from 67 years in the 1920s to 15 years today. Foster also found that on average an S&P company is now being replaced every two weeks, and estimates that 75 percent of the S&P 500 firms will be replaced by new firms by 2027.
Where Do Firms Go When They Die?
Terrence Deacon has written a jaw-dropping book, “Incomplete Nature: How Mind Emerges From Matter”. He’s an evolutionary biologist who in this book rewrites what we know about the physics of living systems. This 30 min. video introduces some of the concepts he discusses in the book. The book is dense but a must-read for anyone interested in complexity and living systems: how biological ‘functions’ arise because of constraints.
Terrence Deacon: Incomplete Nature, How Mind Emerged from Matter - Sane Society
A radical new explanation of how life and consciousness emerge from physics and chemistry from the UC Berkeley Anthropology Chair, Terrence Deacon.
This next 1 ½ hr video is a great summary of Terrence Deacon’s book “Incomplete Nature”. Although long, it is a must-view for anyone interested in understanding, in scientific terms, the emergence of life through complexity: how an open system (open to incoming energy) enables the constraints that give rise to higher orders of order.
Life before genetics: autogenesis, and the outer solar system - Terrence Deacon
The investigation of the origins of life has been hindered by what we think we know about current living organisms. This includes three assumptions about necessary conditions: 1) that it emerged entirely on Earth, 2) that it is dependent on the availability of liquid water, and 3) that it is coextensive with the emergence of molecules able to replicate themselves.
In addition, the three most widely explored alternative general models for a molecular process that could serve as a precursor to life also reflect reductionistically-envisioned fragments of current living systems: e.g. container-first, metabolism-first, or information-first scenarios. Finally, we are hindered by a technical concept of information that is fundamentally incomplete in precisely ways that are critical to characterizing living processes.
These all reflect reductionistic "top-down" approaches to the extent that they begin with a reverse-engineering view of what constitutes a living Earth-organism and explore possible re-compositional scenarios. This is a Frankensteinian enterprise that also begins with assumptions that are highly Earth-life specific and therefore unlikely to lead to a general exo-biology.
The approach Dr. Deacon will outline instead begins from an unstated conundrum about the origins of life. The initial transition to a life-like process necessarily exemplified two almost inconceivably incompatible properties: 1) it must have involved exceedingly simple molecular interactions, and 2) it must have embodied a thermodynamic organization with the unprecedented capacity to locally compensate for spontaneous thermodynamic degradation as well as to stabilize one or more intrinsically self-destroying self-organizing processes.
This talk will explore the origins of life problem by attempting to identify the necessary and sufficient molecular relationships able to embody these two properties. From this perspective Dr. Deacon will develop a model system - autogenesis - that redefines biological information and opens the search for life's origin to cosmic and planetary contexts seldom considered.
This is a great 1hr 16min video by Stuart Kauffman that everyone should pay attention to. A great account of his thinking.
Stuart Kauffman "FROM PHYSICS TO SEMIOTICS"
Seminar in the Department of Semiotics, University of Tartu, Estonia
The seminar was held at the Department of Semiotics of the University of Tartu on April 28, 2012.
To add some spice to Terrence Deacon’s videos, here’s a nice short article discussing developments in the thinking of some physicists.
Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas
A new way of thinking about consciousness is sweeping through science like wildfire. Now physicists are using it to formulate the problem of consciousness in concrete mathematical terms for the first time
There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.
That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.
Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.
Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.
Here’s an interesting paper from the Santa Fe Institute - probably not applicable to government per se - but maybe to individual sub-organizations - e.g. directorates, sub-ministries, etc.
Company mortality: Researchers find patterns in the life and death of firms
It’s a simple enough question: how long does a typical business have to live? Economists have been thinking about that one for decades without a particularly clear answer, but new research by SFI scientists reveals a surprising insight: publicly-traded firms die off at the same rate regardless of their age or economic sector.
Companies come and go for a variety of reasons. Some are bought, some merge with others, and some go out of business completely. There’s no shortage of theories about why.
“The theory of the firm—there are whole books on what people think is going on,” says Marcus Hamilton, an SFI postdoctoral fellow and corresponding author of a new paper published in the journal Royal Society Interface. Despite that, he says, “there is remarkably little quantitative work” on what economists call company mortality, and existing theory and evidence yield contradictory answers. Some researchers think younger companies are more likely to die than older ones, while others think just the opposite.
“We wanted to see if there was any kind of standard behavior or if it was just random,” Hamilton says.
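If firms really do die at a constant rate regardless of age, that is the signature of an exponential survival curve, with a half-life independent of how old a company already is. A minimal sketch of that arithmetic, using a purely illustrative hazard rate (the 0.07 below is a made-up number, not a figure from the SFI paper):

```typescript
// Constant mortality (hazard) rate λ implies exponential survival:
// the fraction of firms still alive after t years is S(t) = e^(-λt),
// and the half-life is ln(2)/λ, the same no matter a firm's age.

function survivalFraction(years: number, hazard: number): number {
  return Math.exp(-hazard * years);
}

function halfLife(hazard: number): number {
  return Math.log(2) / hazard;
}

// Hypothetical hazard of 0.07 deaths per firm-year, for illustration only.
const lambda = 0.07;
console.log(halfLife(lambda).toFixed(1));             // ≈ 9.9 years
console.log(survivalFraction(20, lambda).toFixed(2)); // fraction surviving 20 years
```

The point of the sketch is memorylessness: a 50-year-old firm and a 5-year-old firm face the same expected remaining lifetime under a constant hazard.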
While this Nature article focuses on science research geared to issues of sustainability, the recommendation it makes is a great idea for all science. The graphic illustrating an ‘evidence map’ is particularly interesting.
Sustainability: Map the evidence
Too many studies go unread. Collate them to enable synthesis and guide decision-making in sustainability
“What if someone had already figured out the answers to the world’s most pressing policy problems, but those solutions were buried deep in a PDF, somewhere nobody will ever read them?” asked a Washington Post blog last year.
It was on to something. Many of the tens of thousands of documents that are produced every year to assess the impacts of sustainability policies and programmes are never read. In 2014, the World Bank found that almost one-third of its archived policy reports (documenting the impacts of its numerous projects, from dam construction to microcrediting) had never been downloaded.
It doesn’t have to be this way. Experts in evidence synthesis, a field that involves the use of various tools and methods to locate and combine many sources of data, are starting to produce evidence maps for wayfaring researchers and policymakers. These pull together and categorize systematic reviews, impact evaluations and other primary-research studies in a particular area (such as agriculture or education), and visually distil the scope and effects of interventions that have been implemented.
Evidence maps can show at a glance which areas or relationships have been studied most — whether it be the impact of ecotourism on local economies or of education on reducing harmful fishing practices. They can also highlight key gaps in the evidence base, and so guide the prioritization of research.
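At its core, the categorization the article describes is a tally of studies over intervention × outcome cells, where empty cells expose evidence gaps. A minimal sketch of that idea (the study entries below are invented for illustration, not drawn from any real evidence database):

```typescript
// Sketch of an evidence "gap map": count studies per
// intervention × outcome cell; missing cells are research gaps.

type Study = { intervention: string; outcome: string };

function evidenceMap(studies: Study[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const s of studies) {
    const cell = `${s.intervention} × ${s.outcome}`;
    counts.set(cell, (counts.get(cell) ?? 0) + 1);
  }
  return counts;
}

// Illustrative entries echoing the article's examples.
const studies: Study[] = [
  { intervention: "ecotourism", outcome: "local income" },
  { intervention: "ecotourism", outcome: "local income" },
  { intervention: "education", outcome: "fishing practices" },
];
// "ecotourism × local income" holds 2 studies; any cell absent from
// the map, such as "education × local income", is an evidence gap.
```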
Here’s something that we should think very deeply about - what sorts of policy, governance, and institutions we need for the future of the digital environment and the inevitable ‘gamification’ of social currency.
China Just Launched the Most Frightening Game Ever — and Soon It Will Be Mandatory
As if further proof were needed that Orwell’s dystopia is now upon us, China has gamified obedience to the State. Though that is every bit as creepily terrifying as it sounds, citizens may still choose whether or not they wish to opt in - that is, until the program becomes compulsory in 2020. “Going under the innocuous name of ‘Sesame Credit,’ China has created a score for how good a citizen you are,” explains Extra Credits’ video about the program. “The owners of China’s largest social networks have partnered with the government to create something akin to the U.S. credit score — but, instead of measuring how regularly you pay your bills, it measures how obediently you follow the party line.”
In the works for years, China’s ‘social credit system’ aims to create a docile, compliant citizenry who are fiscally and morally responsible by employing a game-like format to create self-imposed, group social control. In other words, China gamified peer pressure to control its citizenry; and, though the scheme hasn’t been fully implemented yet, it’s already working — insidiously well.
Zheping Huang, a reporter for Quartz, chronicled his own experience with the social control tool in October, saying that “in the past few weeks I began to notice a mysterious new trend. Numbers were popping up on my social media feeds as my friends and strangers on Weibo [the Chinese equivalent to Twitter] and WeChat began to share their ‘Sesame Credit scores.’ The score is created by Ant Financial, an Alibaba-affiliated company that also runs Alipay, China’s popular third-party payment app with over 350 million users. Ant Financial claims that it evaluates one’s purchasing and spending habits in order to derive a figure that shows how creditworthy someone is.”
However, according to a translation of the “Planning Outline for the Construction of a Social Credit System,” posted online by Oxford University’s China expert, Rogier Creemers, it’s nightmarishly clear the program is far more than just a credit-tracking method. As he described it, “The government wants to build a platform that leverages things like big data, mobile internet, and cloud computing to measure and evaluate different levels of people’s lives in order to create a gamified nudging for people to behave better.”
While Sesame Credit’s roll-out in January has been downplayed by many, the American Civil Liberties Union, among others, urges caution, saying:
“The system is run by two companies, Alibaba and Tencent, which run all the social networks in China and therefore have access to a vast amount of data about people’s social ties and activities and what they say. In addition to measuring your ability to pay, as in the United States, the scores serve as a measure of political compliance. Among the things that will hurt a citizen’s score are posting political opinions without prior permission, or posting information that the regime does not like, such as about the Tiananmen Square massacre that the government carried out to hold on to power, or the Shanghai stock market collapse. It will hurt your score not only if you do these things, but if any of your friends do them.” And, in what appears likely the goal of the entire program, added, “Imagine the social pressure against disobedience or dissent that this will create.”
This is an excellent summary of the views of a number of key researchers in the field of Deep Learning.
HERE'S WHAT WE CAN EXPECT FROM DEEP LEARNING IN 2016 AND BEYOND
As 2015 draws to a close, all eyes are on the year's accomplishments, as well as forecasting technology trends of 2016 and beyond. One particular field that has frequently been in the spotlight during the last year is deep learning, an increasingly popular branch of machine learning, which looks to continue to advance further and infiltrate into an increasing number of industries and sectors.
Over the last year we've had the privilege of hearing from many of the great minds working in artificial intelligence and computer science, at RE•WORK events, and we look forward to meeting and learning from many more in 2016!
As part of our ongoing speaker Q&A series, we asked some of the top names in deep learning for their predictions for the field over the next 5 years.
Collective Human-Algorithmic Intelligence - CHAI (you heard it here first :)
The best chess players today are not computers, algorithms, or humans alone, but rather teams of humans with computers and algorithms. This approach should figure in more discussions of humans vs. AI.
AI Machine Learns to Drive Using Crowdteaching
Some driving tasks are trivial for humans but hard for machines. Now researchers have developed a way for AI machines to learn from the crowd.
This has been the year of the AI machine, and it’s been a rapid change. Artificial intelligence has suddenly begun to match and even outperform humans in tasks where we’ve always held the upper hand: face recognition, object recognition, language understanding and so on.
And yet there are plenty of complex tasks in which AI machines still trail humans. These range from simple housework such as ironing to more advanced tasks such as driving. The reason for the slow progress in these areas is not that intelligent machines can’t do these tasks. Far from it. It’s because nobody has worked out how to train them.
The huge progress in face recognition, for example, has come about in large part because of massive databases of images in which human annotators have clearly marked faces in advance. AI algorithms have used these databases to learn.
But nobody has been able to create similar databases for more complex tasks such as driving. The absence of such databases is one of the main reasons for the lack of progress in this area.
Today, that looks set to change thanks to the work of Pranav Rajpurkar and pals at Stanford University in California. These guys have developed a way of creating large annotated databases for exactly these kinds of difficult tasks. And they’ve used the database to teach an AI machine some of the important driving skills that humans take for granted.
Their approach is simple in essence. The basic idea is to make it easy for human annotators to add information to the database and then to evaluate it. Rajpurkar and co do this by turning the process of data entry into a driving game that runs on a Web browser.
The Stanford team start by creating a database of road conditions by driving their own research vehicle along California’s highways. This gathers GPS data, visual data, laser scanning data, and so on.
They then process this data to generate a virtual 3-D environment. The goal is for an AI algorithm—a convolutional neural network called Driverseat—to evaluate this environment and determine things like the positions of other vehicles, the lane it is driving in, off-ramps, on-ramps, and so on. And to do this in a wide variety of driving conditions.
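One core ingredient of any such crowd-labeling pipeline is fusing the annotations of several workers into a single training label. The sketch below shows only that aggregation idea with a simple majority vote; the actual Stanford pipeline is more elaborate, and the label names here are invented for illustration:

```typescript
// Minimal sketch: several crowd workers label the same road frame,
// and a majority vote picks the consensus label for training.

function majorityLabel(labels: string[]): string {
  const counts = new Map<string, number>();
  for (const l of labels) counts.set(l, (counts.get(l) ?? 0) + 1);
  let best = labels[0];
  for (const [label, n] of counts) {
    if (n > (counts.get(best) ?? 0)) best = label;
  }
  return best;
}

// Three workers annotate which lane the car occupies in one frame.
console.log(majorityLabel(["lane-2", "lane-2", "lane-3"])); // "lane-2"
```

The same voting step can be applied per frame across a whole drive, turning noisy individual game sessions into a cleaner annotated database.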
Here’s a great first-person professorial account of the continuing evolution of the Massive Open Online Course (MOOC) - this is worth the read.
MITx u.lab: Education As Activating Social Fields
Until last year, the number of students in my classes at MIT was 50 or so. Less than twelve months later, I have just completed my first class with 50,000 registered participants. They came from 185 countries, and together they co-generated:
• >400 prototype (action learning) initiatives
• >560 self-organized hubs in a vibrant global eco-system
• 1,000 self-organized coaching circles.
What explains the growth in group size from 50 to 50,000? It's moving my class at MIT Sloan to the edX platform, making it a MOOC (Massive Open Online Course).
Designed to blend open access with deep learning, the u.lab was first launched in early 2015 with 26,000 registered participants. When we offered it for a second time, in September, we had 50,000 registered participants. According to the exit survey, 93% found their experience "inspiring" (60%) or "life changing" (33%); and 62% of those who came into the u.lab without any contemplative practice have one now.
One-third of the participants had "life changing" experiences? How is that possible in a mere seven-week online course? The answer is: it's not. The u.lab isn't just an online course. It's an o2o (online-to-offline) blended learning environment that provides participants with quality spaces for reflection, dialogue, and collaborative action.
From the perspective of the course co-facilitation team, the whole u.lab experience felt like a journey of profound personal, relational, and institutional inversion. To invert something means to turn it inside-out or outside-in. In the case of the u.lab, not only was the classroom experience inverted, but so was the conversation among learners and the learners' cognitive experience.
This is a 50 min video about marketing - it may seem like a strange item for this list - but it presents myth-busting ideas about why people buy and what marketers should focus on: the narrative versus the brand, and the need to train marketers in science.
How Brands Grow - Knowledge Works lecture 2010 at UniSA
Professor Sharp's presentation "How Brands Grow" draws on years of research and marketing knowledge to answer important questions, and dispel common misconceptions about brand growth, competition, loyalty, advertising and price promotions.
This is a short article announcing an initiative to build an Internet-of-Things (IoT) platform.
Thinfilm Receives Funding to Help Create Open-Source Internet of Things Platform
Thin Film Electronics ASA announced that the Company has been awarded a grant from the European Commission as part of its Horizon 2020 research and innovation initiative. The grant will fund the “TagItSmart” project, through which Thinfilm will partner with global technology, consumer packaged goods (CPG), and smart-products leaders to create the world’s first “Internet of Things” (IoT) platform featuring open-source, open API (Application Programming Interface) architecture. For its part in the project, Thinfilm will receive EUR 472,312 (approximately USD 511,000) over three years.
The focus of TagItSmart will be to create a global-scale IoT platform – built using open-source architecture – to support trillions of intelligent items and the data they generate. The platform will provide compelling functionality and full interoperability in order to seamlessly integrate with a vast array of IoT-centric tools, technologies, and software. Ultimately, TagItSmart will help organizations effectively address challenges regarding the management of IoT products and related services as they seek to capitalize on the growing “sensorization” of objects. TagItSmart will deliver a range of functional capabilities, including 1) the creation of smart markers using NFC or functional codes, 2) the secure acquisition and consumption of contextual data, and 3) the efficient creation and deployment of IoT-based services. To boost platform adoption, a set of industrial use cases will be identified and demonstrated.
Thinfilm’s NFC OpenSense™ technology will be featured as a key component of the TagItSmart platform, and end-users will be able to access several related use cases that highlight commercial deployment of NFC OpenSense in market.
Here’s a breakthrough on the road to a new computing paradigm - one that may be approaching faster than we imagine.
Engineers demo first processor that uses light for ultrafast communications
Engineers have successfully married electrons and photons within a single-chip microprocessor, a landmark development that opens the door to ultrafast, low-power data crunching.
The researchers packed two processor cores with more than 70 million transistors and 850 photonic components onto a 3-by-6-millimeter chip. They fabricated the microprocessor in a foundry that mass-produces high-performance computer chips, proving that their design can be easily and quickly scaled up for commercial production.
The new chip, described in a paper to be published Dec. 24 in the print issue of the journal Nature, marks the next step in the evolution of fiber optic communication technology by integrating into a microprocessor the photonic interconnects, or inputs and outputs (I/O), needed to talk to other chips.
"This is a milestone. It's the first processor that can use light to communicate with the external world," said Vladimir Stojanović, an associate professor of electrical engineering and computer sciences at the University of California, Berkeley, who led the development of the chip. "No other processor has the photonic I/O in the chip."
The achievement opens the door to a new era of bandwidth-hungry applications. One near-term application for this technology is to make data centers more green. According to the Natural Resources Defense Council, data centers consumed about 91 billion kilowatt-hours of electricity in 2013, about 2 percent of the total electricity consumed in the United States, and the appetite for power is growing exponentially.
Here’s a great example of the inevitable trajectory of Internet access as a public utility or infrastructure.
New York is finally installing its promised public gigabit Wi-Fi
Today, workers began installing the first LinkNYC access points in New York. First announced in November 2014, the hubs are designed as an update to the standard phone booth, using upgraded infrastructure to provide gigabit Wi-Fi access points. This particular installation was spotted outside a small Starbucks at 15th St and 3rd Avenue, near Manhattan’s Union Square. 500 other hubs are set to be installed throughout the city by mid-July. LinkNYC anticipates one or two weeks of testing before New Yorkers will be able to use the hubs to get online.
The full network will install more than 7,500 public hubs throughout the city, each replacing a pre-existing phone booth. Once completed, the hubs will also include USB device charging ports, touchscreen web browsing, and two 55-inch advertising displays. The city estimates that ads served by the new hubs will generate more than $500 million in revenue over the next 12 years.
Emerging from the Reinvent Payphones design challenge under Mayor Bloomberg, the LinkNYC project has been the subject of significant controversy in recent months. Shortly after the initial buildout was announced, the Daily News reported that outer-borough hubs in Brooklyn and the Bronx were exhibiting speeds as much as ten times slower than equivalent hubs in Manhattan. One of the companies involved in the hubs, Titan, also drew controversy for implanting Bluetooth beacons in the test hubs, which could potentially have been used to track pedestrians and serve ads. The beacons were removed shortly after their existence was made public. This summer, Titan merged with Control Group to form a new company called Intersection, and Google's Sidewalk Labs purchased a non-controlling portion of the subsequent company.
When the project was announced in 2014, LinkNYC said it would begin construction "next year." This week's construction push allowed them to make good on that promise just a few days before 2016. Other functionality may take longer to come online, particularly the built-in touchscreen-enabled tablet, designed for web browsing, maps, and free phone calls. On an accompanying pamphlet, those features are listed as "coming soon."
This is not just about the looming paradigm change in transportation - it will also accelerate the huge change in energy-driven geopolitics.
China Orders $12 Billion in Electric Cars from Former Saab Outfit
China's vehicle leasing company Panda New Energy has placed an order for electric cars worth $12 billion from Chinese-owned carmaker Nevs, formerly Saab, the Sweden-based company announced on Thursday.
It was the second order this autumn for National Electric Vehicle Sweden (Nevs), which has faced major financial woes since being established in June 2012 to take over Saab's assets after bankruptcy.
"This is a strategic collaboration for Nevs not only in terms of the numbers of vehicles, but it is also an important step to implement our vision and new business plan," Nevs vice chairman Stefan Tilk said in a statement.
Nevs, whose core strategy is to produce 100-percent electric vehicles for the Chinese market, has only produced cars during a brief period from late 2013 to May 2014.
And more on the emerging self-driving transportation:
Ford and Google Could Be Making the Model T of Automated Driving
A deal between the mass-market automaker and the company furthest ahead in automated driving would accelerate adoption.
In what could turn out to be a very important partnership for the future of the car, Ford and Google are reportedly planning a joint venture to develop automated driving technology. According to Yahoo Motors, the companies will announce the venture at the Consumer Electronics Show, in Las Vegas, next month.
If so, it would be a major moment in the history of automated driving. While most car makers are working on such technology, so far only Tesla has released a vehicle capable of advanced automation—and then only in vehicles that cost close to $100,000 apiece. Other cars that feature some automated driving capabilities are similarly priced.
Ford’s cars are much less expensive than Tesla’s, and so the new venture might lead to more reasonably priced self-driving vehicles in the near future. It might also help drive down the cost of the sensors and other systems required for self-driving. The venture could also help Google gain a foothold in the auto industry without needing to get into manufacturing cars itself.
The deal would certainly fit with Ford’s ambitious efforts to reinvent itself as a more tech-savvy, future-focused company. With research suggesting that the software and connectivity found in cars is becoming as important as the powertrain or design, and that fewer people are interested in owning a vehicle rather than sharing one, Ford has begun rethinking its business through an effort called Smart Mobility.
While this is talking about the ‘conflict space’ of 2050, it should be considered relevant to the ‘business space’ as well.
Visualizing the Tactical Ground Battlefield in the Year 2050: Workshop Report
This report describes the proceedings and outcomes of a workshop that brought together a diverse group of intellectual leaders to envision the future of the tactical ground battlefield. This workshop, organized by the University of Maryland (UMD) on behalf of the US Army Research Laboratory (ARL)/Army Research Office (ARO), took place on March 10–11, 2015, at the College Park Marriott Hotel & Conference Center located near the UMD campus in East Hyattsville, Maryland. This workshop focused its attention on the impact that information technologies (broadly understood) would have on tactical ground warfighting circa 2050. In describing the nature of the workshop to participants, this workshop was alternatively described as “Future Cyber Warfighting,” and “Information Technologies and Ground Warfighting.”
The dominant technologically driven changes, including those of warfighting, of the last few decades have had much to do with the technologies and concepts that are associated with the Information Age. Therefore, it could be assumed that the continuing evolution of information technologies (and possibly revolutionary changes) will continue to be one of the significant forcing functions that will shape related warfighting technologies and capabilities between now and 2050. For the purposes of this workshop, information technologies include robotics, smart munitions, ubiquitous sensing, and extreme networking, along with the potentially massive impact of cyber warfare. The workshop critically examined this “Information Age” assumption and its implications.
We recognize that information-related technologies will continue to advance between now and 2050, and that these advances and their commercialization will change the economics of communications and information and, thus, change warfare. As a result, the roles of information technologies will coevolve (i.e., will influence and be influenced by) with future concepts and technologies for key warfighting functions, including seeing (sensing), understanding, communicating, moving, and applying kinetic and non-kinetic effects. Further, these developments will spawn a cascade of countermeasures and counter-countermeasures; the net result will be what the future Soldier sees and experiences on the tactical battlefield. It is therefore apparent that one cannot correctly visualize the future battlefield by focusing on the evolution of information technologies alone. Thus, to avoid a vision that mismatches 2050 information technology with the warfighting tools and techniques of 2015, workshop participants were asked to simultaneously explore future visions of both the informational and physical aspects of the battlefield.
The idea of creating dynamic visualizations is a powerful one to consider - the accelerating pace at which we are inundated with information makes it increasingly difficult to keep up and to integrate what we read, access, and know. Here’s one beautiful demonstration of how the digital environment can enable more immersive and interactive media content. Click on the demo - and just scroll down and watch the map.
Animated Map Path for Interactive Storytelling
An interactive journey where a Canvas map path is animated according to the content that is scrolled on the page.
Today we’d like to share an experimental demo with you. This demo is an interactive map that will animate a map path while scrolling the page. The main idea is to connect the story being told with the path itself. The journey can also contain images that will indicate where they have been taken with a semi-transparent cone. It will appear as soon as the image is in the visible viewport. A little plane icon is used in the last bit of the path in order to indicate that the journey was made on an aircraft.
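The core of a demo like this is mapping scroll progress to a position along the path. A minimal sketch of that mapping, in TypeScript (the function names and structure here are illustrative, not the demo's actual code):

```typescript
type Point = { x: number; y: number };

// Cumulative arc lengths along a polyline path.
function cumulativeLengths(path: Point[]): number[] {
  const lens = [0];
  for (let i = 1; i < path.length; i++) {
    const dx = path[i].x - path[i - 1].x;
    const dy = path[i].y - path[i - 1].y;
    lens.push(lens[i - 1] + Math.hypot(dx, dy));
  }
  return lens;
}

// Point at fraction t (0..1) of the total path length,
// found by linear interpolation along the containing segment.
function pointAt(path: Point[], t: number): Point {
  const lens = cumulativeLengths(path);
  const target = Math.max(0, Math.min(1, t)) * lens[lens.length - 1];
  let i = 1;
  while (i < lens.length - 1 && lens[i] < target) i++;
  const segLen = lens[i] - lens[i - 1] || 1;
  const f = (target - lens[i - 1]) / segLen;
  return {
    x: path[i - 1].x + f * (path[i].x - path[i - 1].x),
    y: path[i - 1].y + f * (path[i].y - path[i - 1].y),
  };
}

// In the browser, a scroll handler would convert scroll position into t
// and redraw the Canvas path up to pointAt(path, t) - e.g.:
//   const t = window.scrollY / (document.body.scrollHeight - window.innerHeight);
//   const tip = pointAt(path, t);  // where to draw the plane icon
```

Images pinned to the journey (the semi-transparent cones) can use the same idea in reverse: each image stores a fraction along the path, and appears once the scroll-derived t passes it.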
Here’s something that may be coming to a ‘Brita’ or Brita-clone type water filter near you soon.
A faster, cheaper water filter, thanks to sugar
A new filter material may be better at straining contaminants from water than the activated carbon in your faucet filter—and may be cheaper and easier to clean, to boot. If it can be developed into a successful technology, the new material might help remove from the water supply small organic molecules such as Bisphenol A (BPA), a byproduct of some plastic manufacturing that has been linked to environmental damage and health risks.
“This was pretty exciting,” says Susan Richardson, an environmental chemist at the University of South Carolina in Columbia, who was not involved in the study. “It looks very promising. I can’t see a downside yet.”
The researchers estimate that manufacturing costs may drop as low as $5-$25 per kilogram of β-cyclodextrin as chemists continue to refine the polymerization process - about half the cost of charcoal filters. Richardson remains skeptical of that estimate: “I’m thinking this is going to be something like 100 times the cost of activated carbon,” she says. “But it’s cheaper, apparently, so go figure.”
Dichtel is hopeful that the new material could eventually be useful not just in commercial water filters, but in industrial ones as well, especially in the developing world. The technology could also be a boon to wildlife: BPA mimics the hormone estrogen and has been wreaking havoc on fish populations for years now, causing male fish to become female in some cases. “We’ve got a big problem with estrogen in fish in our rivers,” Richardson says. “If this works and it’s cheaper than activated carbon, I could see it having promise for reducing the impact of the estrogenicity and helping our fish recover, too.”
This is an interesting article discussing the use of IBM’s Watson to help in a psychological analysis of people’s writing. This is an interesting trajectory to track - our digital trails may reveal a lot more than we think - especially with the accelerating pace of AI. I submitted a couple of blog posts and a longish email to a friend - with very interesting results.
I Asked A Computer To Be My Life Coach
IBM's Watson analyzes the Twitter account of an unnamed user, breaking down needs, values, and five personality traits: openness, conscientiousness, extroversion, agreeableness, and neuroticism.
The words you use betray who you are.
Linguists and psychologists have long been studying this phenomenon. A few decades ago they had a hunch that the number of active verbs in your sentences, or which adjectives you use (lovely, sweet, angry), reflects personality traits.
They have painstakingly pinpointed various insights. For example, suicidal poets, in their published works, use more first-person singular words (like "me" or "my") and death-related words than poets who aren't suicidal. People in positions of power are more likely to make statements that involve others ("we," "us"), while lower-status people often use language that's more self-focused and ask more questions. Comparing genders, women tend to use more words related to psychological and social processes, while men refer more to impersonal topics and objects' properties.
This research suggests that Internet companies such as Facebook and Google, with their troves of written expressions, are sitting on powerful insights about us as people. But if you ask them, "Hey, can you give me the take on me that you've got in-house or that you've built for advertisers, with my anonymized data?" — they won't give it to you. I actually did ask, and they don't have that kind of offering.
But I've found someone who does: IBM's Watson division. Researchers there have taken the personality dictionaries already created by scientists, dropped them into Watson (the computer that won Jeopardy!), and sent it off to apply it to people on Twitter, Facebook, blogs. That forms a digital population of people and personality types. Over time, more text from more people will help Watson get smarter. (Yes, this is machine learning.)
In its own studies, IBM found that characteristics derived from people's writings can reliably predict some of their real-world behaviors. For instance, people who are less neurotic and more open to experiences are more likely to click on an ad, while people who score high on self-enhancement (meaning, seek personal success) like to read articles about work.
You can try it yourself at IBM’s Watson Personality Insights site.
Here’s another article about computer-based psych assessments.
Computer-based personality judgments are more accurate than those made by humans
This study compares the accuracy of personality judgment—a ubiquitous and important social-cognitive activity—between computer models and humans. Using several criteria, we show that computers’ judgments of people’s personalities based on their digital footprints are more accurate and valid than judgments made by their close others or acquaintances (friends, family, spouse, colleagues, etc.). Our findings highlight that people’s personalities can be predicted automatically and without involving human social-cognitive skills.
Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.
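The accuracy figures in this abstract (r = 0.56 for the computer vs. r = 0.49 for friends) are Pearson correlations between a judge's trait estimates and the participants' self-reported scores. A minimal sketch of that measure, with hypothetical score vectors:

```typescript
// Pearson correlation coefficient between two equal-length score vectors -
// the accuracy criterion used to compare computer and human judges.
function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (xs[i] - mx) * (ys[i] - my);
    sxx += (xs[i] - mx) ** 2;
    syy += (ys[i] - my) ** 2;
  }
  return sxy / Math.sqrt(sxx * syy);
}

// Hypothetical use: self-reported trait scores vs. two judges' estimates.
// Whichever judge's estimates correlate more strongly with the self-reports
// is the more accurate on this criterion.
// const rComputer = pearson(selfReports, computerEstimates);
// const rFriend = pearson(selfReports, friendEstimates);
```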
This is another of the rapidly emerging developments in the shift in energy paradigm.
Desert tower raises Chile's solar power ambition to new heights
Towering 200 metres above the desert, the Atacama 1 will harvest the sun’s energy from a surrounding field of giant mirrors. But the completion of the $1.1bn project, the first of its kind in Latin America, has been thrown into doubt by the financial difficulties of its Spanish owner
Rising more than 200 metres above the vast, deserted plains of the Atacama desert, the second tallest building in Chile sits in such a remote location that it looks, from a distance, like the sanctuary of a reclusive prophet, a temple to ancient gods or the giant folly of a wealthy eccentric.
Instead, this extraordinary structure is a solar power tower that is being built to harvest the energy of the sun via a growing field of giant mirrors that radiate out for more than a kilometre across the ground below with a geometric precision that is reminiscent of contemporary art or the stone circles of the druids.
Still under construction, the Atacama 1 Concentrated Solar Power plant is a symbol of the shift from dirty fossil fuels to a cleaner, smarter way to generate electricity in Chile, which is leading the charge for solar in Latin America thanks to its expanses of wilderness and some of the most intense sunlight on Earth.
“In solar alone, we have 1,000 gigawatts of generation potential, but domestic demand is less than 20GW. In future, we could export energy to other Latin American countries,” said Patricio Rodrigo, the executive director of the Chile Ambiente corporation.
Think of generating hydrogen as a form of battery - or of using excess energy to desalinate water.