In September of 2013, the University of Baltimore launched a university-wide initiative to answer the question:
“What does it mean to be living, working, playing, creating, and learning—and thriving—in this new digital world?”
Stephen Wolfram, founder of the company behind Wolfram Alpha and Mathematica, has just announced the imminent launch of Wolfram Language, an incredibly powerful interactive programming language with built-in knowledge of the world (e.g., it knows things like state capitals and English grammar) and a huge set of functions for working with that knowledge. You can bring new data into a program with a single command (there’s a function that “scrapes” web sites and imports all of a site’s links, for instance) and manipulate it symbolically so it can be used throughout the programs you write. It looks pretty easy to learn, too. What all this really means is that non-techie types will have the ability to extract data from the internet, manipulate that data, use it to make decisions, and then output the results in a whole range of ways, including graphs, charts, geo-plots, and 3D graphics. And because Wolfram Language also comes with built-in “hooks” for talking to a wide variety of devices, you can even use it to trigger devices outside of your computer.
The video above (narrated by Wolfram himself) does a much better job explaining many of the language’s capabilities, but once you watch it (and if Wolfram Language lives up to what he promises), it may open up a huge flood of innovation and creativity from people who never thought they’d be able to use a programming language because so much of the stuff that takes lines and lines of code in other languages is reduced to a command or two in Wolfram Language. It should be pretty interesting to see what people do with this new tool!
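To give a sense of the contrast the announcement draws, here’s a rough sketch, in plain Python, of the kind of link-extraction task that Wolfram says his language handles with a single Import command. The HTML here is a stand-in string rather than a live page fetch, so the example is self-contained:

```python
# Extracting every hyperlink from a page: a single command in Wolfram
# Language, but noticeably more ceremony in plain Python.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A stand-in page; in practice you'd fetch real HTML with urllib.request.
sample_html = """
<html><body>
  <a href="http://example.com/about">About</a>
  <a href="http://example.com/contact">Contact</a>
</body></html>
"""

parser = LinkExtractor()
parser.feed(sample_html)
print(parser.links)  # the list of extracted hrefs
```

That’s the gap Wolfram is promising to close: a couple dozen lines of boilerplate collapsed into one built-in function.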
Historian Michelle Moravec recently posted a fantastic article on the Chronicle of Higher Education’s ProfHacker blog where she dishes up some great advice for scholars looking for new digital tools to help them in their work. She recounts the lessons she learned while working on her Visualizing Schneemann project, which uses different digital network analysis methods to analyze the correspondence of feminist artist Carolee Schneemann in order to get a better picture of Schneemann’s social networks, influences, travels, etc. The end result is an amazing collection of graphs that really help viewers better understand the artist.
You can (and should) read the article for yourself, but I wanted to highlight a few of the digital tools Moravec used in her research. Not only are these free, but they could be a huge help to anyone interested in data visualization, digital textual analysis, or even just folks who want to actually “see” their own social networks in a new way. Happy hacking!
Gephi: A free, open source, high-powered tool for working with network graphs. If you don’t know anything about network theory, don’t panic: there are a number of tutorials that walk you through the basics of creating your own graphs. It even has a built-in tool that analyzes your email account so you can see a visual representation of who you correspond with and the relationships among your correspondents.
Raw: Raw is a much more general-purpose data visualization tool than Gephi and a heck of a lot easier to use for novices. All you need to do is cut and paste your data into the Raw web site from a source like Excel or even a flat-file database, set a few parameters, and poof! you’ve got a really snazzy looking graph in vector format (meaning you can easily resize it without getting all bit-mappy) that’ll impress anyone who sees it next time you have to do a presentation or turn in a paper.
StanfordNER: This one’s a little bit more obscure and requires a bit more technical knowledge to set up, but if you’ve ever wished that you had a tool that was able to automatically identify and extract “entities” such as names, places, organization names, etc. from big chunks of text, the Stanford Named Entity Recognizer uses some pretty fancy natural language processing techniques to do just that.
TimeMapper: Probably the coolest tool on the list (though I have to admit that I kinda have a soft spot for timelines), TimeMapper allows you to create really cool map/timeline mashups for free. All you have to do is fill out a Google spreadsheet (provided) with information such as the name of your timeline entry, dates, description, and place, and TimeMapper takes your info and creates a custom timeline tied to a custom Google map so that you can visualize your information not only in time (through the timeline) but in space as well (on the Google map). The Google spreadsheet at the heart of TimeMapper even automatically converts location names to latitude and longitude coordinates! You can even spiff up your timelines with images and links for additional information.
Polarized Twitter network. Click the image to see more detail.
Edit: cool new study…and a bonus tool! Not long after I posted this I happened to run across this article on mapping Twitter Topic Networks on the Pew Internet and American Life site. It’s a really fascinating piece that identifies six distinct patterns of behavior on Twitter:
Polarized Crowds (see image to the right): groups on either side of an issue talk among themselves but interact little with their opponents.
Tight Crowds: “highly interconnected people” with a few others from outside the main group.
Brand Clusters: well-known products, services, or celebrities form a hub that attracts large numbers of people from all over the Twittersphere.
Community Clusters: a number of smaller groups, each with its own individual character.
Broadcast Networks: mainly people retweeting news from major news sources.
Support Networks: groups that form around major brands that use Twitter for customer support.
The article also includes a fantastic Method section that describes in detail how the researchers used the free NodeXL Excel Template to collect and visualize the data in their study. NodeXL can do some pretty cool stuff: check out the NodeXL Graph Gallery for tons of examples.
Reserve your free tickets now for NET/WORK Baltimore on Thursday, February 20th. If their last event (Technical.ly Philadelphia, shown here) is any indication, it looks like it’ll be a great event for job-seekers!
Baltimore ties for 5th in the US for highest percentage of smartphone users.
If you needed another reason to feel smart for living in and around Baltimore (and for going to UB!), here’s one: Baltimore ranks #5 among US markets for smartphone users. According to a recent survey by Nielsen (famous for tracking TV and radio usage), 72% of Baltimore-region mobile phone users have smartphones, slightly above Chicago, IL’s 71% and just behind Miami, FL’s 73%. The top smartphone-using city in the US was Dallas, TX, with 76% of mobile users sportin’ smartphones. On average, 67% of US mobile subscribers use smartphones.
New API allows digital content creators to incorporate elements of the Walters’ collection of 10,000 objects into their own programs
The Walters Art Museum in Baltimore just announced the public release of an API (Application Programming Interface, for you non-coders out there) that allows creatively-minded programmers and artists to query the Museum’s collections by a wide range of parameters, including location in the Museum, specific exhibitions, geographic origin, name, keyword, catalog ID, and more.
“So what?” you might be asking. Fair question. Basically this new API allows people to build custom web-based apps (or, with a language like Processing, even stand-alone apps) that draw their content from the collection or allow users to make customized searches through the Walters’ huge collection of 10,000+ objects that literally spans thousands of years of human history. Rather than have to “scrape” content off of the Walters’ site and risk broken links and difficult-to-manage code, people who want to incorporate art from the collection can query the database directly. Better still, everything’s available under a public license that merely requires those who tap into the database to credit the Museum.
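As a minimal sketch of what “querying the database directly” might look like, here’s a small Python helper that builds a collection-search URL. Note that the endpoint path and parameter names shown here are assumptions for illustration, not documented values; check the Walters’ developer documentation for the real ones:

```python
# Sketch of building a query URL for a museum-collection API like the
# Walters'. The endpoint and parameter names are ASSUMED for illustration.
from urllib.parse import urlencode

BASE_URL = "http://api.thewalters.org/v1/objects"  # assumed endpoint

def build_query(api_key, **params):
    """Build a collection-search URL from keyword parameters."""
    params["apikey"] = api_key  # most public APIs require a key like this
    return BASE_URL + "?" + urlencode(sorted(params.items()))

# e.g., search the collection by keyword (hypothetical parameter names)
url = build_query("YOUR_KEY", keyword="manuscript")
print(url)
```

From there, a web app would simply fetch that URL and render the returned records, no screen-scraping (or broken links) required.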
Should be pretty interesting to see what people come up with!
A new article in The Economist takes a hard look at how technology-driven automation will impact jobs in the next 20 years. Their conclusion? Just as the Industrial Revolution “swept aside” the livelihoods of most of those who made their living as craftspeople who made things by hand, the Digital Revolution will eliminate ever-increasing numbers of jobs considered “white collar” (and automation-proof) today. The result will be increasing income inequality and rising unemployment. The disruptions of recent years provide a model for where things may be headed in the future:
[We're looking at] history repeating itself. In the early part of the Industrial Revolution the rewards of increasing productivity went disproportionately to capital; later on, labour reaped most of the benefits. The pattern today is similar. The prosperity unleashed by the digital revolution has gone overwhelmingly to the owners of capital and the highest-skilled workers. Over the past three decades, labour’s share of output has shrunk globally from 64% to 59%. Meanwhile, the share of income going to the top 1% in America has risen from around 9% in the 1970s to 22% today. Unemployment is at alarming levels in much of the rich world, and not just for cyclical reasons. In 2000, 65% of working-age Americans were in work; since then the proportion has fallen, during good years as well as bad, to the current level of 59%.
So what can be done? The answer, according to The Economist, is, in large part, education. The education system itself needs to be overhauled:
…schools themselves need to be changed, to foster the creativity that humans will need to set them apart from computers. There should be less rote-learning and more critical thinking.
And what if we don’t change things? The lessons of the past tell us that maintaining the status quo will lead to a world of even greater disruption. Only by beginning to change today will we be able to head off the inevitable revolution(s) of tomorrow:
Innovation has brought great benefits to humanity. Nobody in their right mind would want to return to the world of handloom weavers. But the benefits of technological progress are unevenly distributed, especially in the early stages of each new wave, and it is up to governments to spread them. In the 19th century it took the threat of revolution to bring about progressive reforms. Today’s governments would do well to start making the changes needed before their people get angry.
B.O.B. (1983) and MindLink (1984) at the Consumer Electronics Show
Here are two pictures from the Consumer Electronics Show taken in 1983 and 1984…31 and 30 years ago, respectively. The picture on the left is of B.O.B., short for “Brains On Board” (clever, huh?), billed as a “personal robot assistant” from creator Androbot, one of the first companies funded by Atari creator Nolan Bushnell’s Catalyst Technologies Venture Capital Group (one of the first venture firms to focus on high tech in the way we think of it today). B.O.B. was designed to be the smarter younger brother of the earlier Topo robot. Topo was sold as “a mobile extension of your personal computer,” and (no surprise) required an external computer to do the heavy-duty processing it needed to roll around. Topo sported some pretty impressive specs for the time: three 8031 microprocessors, wireless communication with your home PC via infrared link, 5 slots for additional sensors, and even limited speech capability.
Unfortunately, as a “personal assistant” Topo was pretty lame. He didn’t have any sort of attached manipulator (arm, hand, etc.) and didn’t come standard with any sensors, so he really couldn’t do more than roll around on his two angled wheels (check out Topo’s bottom in the picture above) while being remotely controlled by a person sitting at their Apple ][+ personal computer. Topo might have made for a cool party trick, but he was a long way from The Jetsons’ Rosie.
Rosie the Robot from The Jetsons
The right half of the CES picture at the top of this post shows a woman using MindLink, a never-released controller for the wildly popular Atari 2600 video game console. Though it was billed as a controller that let you use your mind to control the action on the screen, in reality MindLink used sensors to read muscle movements in the player’s forehead. A gaming system essentially controlled by one’s eyebrows never really took off: early testers complained of headaches induced by forehead muscle strain. However, according to this article from the Atari Museum site, the headband controller could be strapped to a “bicep or thigh” and users could be trained to use those muscles to control simple Pong-like games. Fun, huh?
MindLinkin’ it solo, 1984 style
So where have we gotten in 30 years? Where are the robots we were promised? Why aren’t we “jacking in,” using our minds to control our computers rather than typing on keyboards that haven’t changed much since the first “dumb” CRT terminal hit the market back in the 1970s?
Land-based robots have also come a long way. Led by innovative firms such as Boston Dynamics (recently acquired by Google), the new batch of autonomous ground-based robots is a far cry from ol’ B.O.B. of 30 years ago.
Boston Dynamics’ “WildCat”
Boston Dynamics’ “Petman”
Besides the creepy “uncanny valley” feeling that comes over many of us when we see these increasingly life-like robots, one of the most interesting developments has been the shift away from the general-purpose, human-like “mechanical men” of the past, which strove to replicate humanity, toward specialized robots designed to do things people can’t do, in forms that bear little resemblance to humans. Sure, ASIMO might resemble Verne Troyer in a space suit, but it’s pretty clear from its leg joints that we’re not looking at a diminutive person. And yes, Boston Dynamics’ “Petman” does creepily resemble a soldier in full CBW (Chemical and Biological Weapons) gear, but that’s because it’s been designed to test clothing meant for humans, not because Boston Dynamics was trying to replicate a “man without a heart” like the Tin Man from The Wizard of Oz. Today, form arises from function: floor-sweeping robots are flat discs because they work better that way. Rosie the Robot of The Jetsons would have just as hard a time vacuuming under furniture as the humans she was modeled on.
In many ways the development of robotics over the centuries, from concept to mechanical automaton to the drones of today, tells us a lot about the desires and aspirations of humanity. Homer called them “Golden Servants,” robots better than humans, forged by the gods to serve the gods. Golems of the early Talmud could only be created by those closest to God. [editor's note: added after the first comment] Early Chinese artisans developed complex, human-like automata designed to mimic humans by being better than humans, even when playing as an orchestra. Leonardo used his knowledge of mechanics and human anatomy to develop robot knights powered by linkages modeled on human tendons and muscles. Artisans of the 16th and 17th centuries carried on Leonardo’s fascination with humanity and the power of science and technology to build increasingly complex “model humans” (or “androides,” as they came to be called in the 1720s, after an automaton attributed to the German alchemist Albertus Magnus) such as the “mechanical monk” built around 1560. Automata of the 18th century reflected the Humanist spirit by attempting to re-create artificial people who could write, draw, sing, and play music. The Industrial Age brought about its own mechanical marvels designed to do work better, faster, cheaper, and more productively than the troublesome humans toiling in factories or the unreliable human “calculators” churning out tables of numbers by hand. As the 20th century dawned and the First World War demonstrated what happens when mechanization is applied to destruction, robots (mostly fanciful creations in theater or film) served as reminders of what happens when humanity is removed from life.
Asimov’s “Three Laws of Robotics,” first published in 1942 as the Nazis ground up Europe, placed on robots a code of ideals designed to eliminate the chance of robot/human conflict: “1) a robot may not injure a human being, or, through inaction, allow a human being to come to harm; 2) a robot must obey orders given it by human beings except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
From Metropolis, 1927
The postwar conception of robots (up until the late 1960s) was all about optimism, triumphant science, and the ideal of never-ending, universal prosperity. After the horrors of World War II, we wanted to create human-like things that could do the things humans shouldn’t have to do (cue Rosie the Robot again, always eager to do the dirty housework that Jane Jetson never wanted to do). Science, which had created the atomic bomb, conquered the sea and the skies, and begun to conquer even Heaven (space) itself, seemed inexorable and omnipotent. When computer scientist Alan Turing defined what would later be called “artificial intelligence” through the Turing Test (a test a computer can pass only by becoming indistinguishable from a human), the idea that science could create a sentient being didn’t seem all that far off. When George Devol and Joseph F. Engelberger met over martinis and decided to form the first robot company (Unimation, which created the first industrial robot, Unimate), they were motivated by a desire to free humans from the drudgery of “putting and taking,” tasks that made up 50% of the work in factories. Their thoughts were of liberating humans, not impoverishing them by eliminating their usefulness. The Industrial Age had finally fulfilled its promise.
Much of the tumult of the late 1960s arose out of what might have been an instinctual realization that machinery and humanity might not be able to co-exist peacefully. When 2001: A Space Odyssey‘s HAL decides that it doesn’t need the humans accompanying it on a trip to Jupiter to probe an ancient mystery, the anxiety over HAL echoed the growing anxiety of humans about the machines that seemed to be replacing them and running their lives. Only by shutting down the artificial intelligence is Dr. Bowman (and humanity) able to move forward to the next stage of human evolution.
HAL decides to do something else
The period from the 70s through the early 90s was a time of ambivalence toward robotics. Great technological strides were made by scientists working to make robots more intelligent and responsive to their surroundings. A robot conquered Mars when the ironically-named Viking lander soft-landed there at the height of the US Bicentennial celebration, providing another reminder to the world of US technological and scientific superiority. Eleven years later, automated (robotic) stock trading would nearly crash the entire US economy, a robotic danger narrowly avoided, unlike the apocalypse brought on by the rogue robots of “SkyNet” in 1984’s The Terminator.
But while people may have been growing increasingly wary of technology in the 1980s (we now tend to forget that the video game industry was nearly destroyed in 1983, when the nascent industry overextended itself), the dot-com boom of the mid-to-late 1990s did much to alleviate their anxiety, at least for a while. At the time, “technology” seemed transcendent, as technology entrepreneurs were rewarded for their efforts with unimaginable wealth. While technology once was scary, now, in the age of the Internet, we’d mastered it and bent it to our will. Robots might still fight, but now they fought for us in Robot Wars. Engineers and scientists were creating robots that swam, explored alien worlds, digested food, and even drove us into buying frenzies in order to be entertained by robotic antics. When Honda first introduced the jaw-droppingly humanoid ASIMO in 2000, it seemed to many that there was nothing technology couldn’t achieve.
In retrospect, it seems that the 8,000-mile flight of the Global Hawk, one of the first modern drones, in April of 2001 was a harbinger of a new age in robotics. Prior to Global Hawk we’d had robots that served and amused humanity in ways more or less compliant with Asimov’s Three Laws of Robotics. When DARPA created the Global Hawk (which, to be accurate, first took to the skies in 1998), humanity was, for the first time, creating a robot designed to facilitate the killing of human beings through surveillance. In an age suddenly plunged into cynicism and fear by the events of September 11th, 2001, Global Hawk arrived just in time to keep an eye on the increasingly dangerous world we’d found ourselves in. For technologically based societies such as the US, the human cost of war had now become more distant. For the less technologically advanced, war was about to become much closer, more sudden, more surprising, and increasingly hard to resist.
Drone strike footage
Today, more than halfway through the first quarter of the 21st century, robots have become increasingly disconnected from humanity. We no longer strive to create human analogues that are more than human, or optimistically look to robots to liberate humanity from drudgery so that it can ascend to a higher state. Instead, robots have become de-anthropomorphized others under our control, designed to do the dirty jobs we don’t want to do. From lingering for hours over a war zone in order to make a kill, to lugging war materiel over rugged terrain, to sweeping our floors, taking the wheel to “save” us from the drudgery of commuting, watching our children, even cruising through our arteries as “nanobots” to clear them of deadly plaques built up over a lifetime of indulgence, robots now serve as a way of separating humans from the consequences of their actions. Is it any wonder that, in an age where experience is increasingly mediated through screens carrying ephemeral electronic traces, we have created devices to mediate reality for us?
Photo courtesy of Jamie Mellor under Creative Commons license
Jeffrey Phillips, author of Relentless Innovation, recently posted a really interesting piece on why so many companies churn out crappy products. And with the Consumer Electronics Show starting tomorrow, it seemed like a perfect time to inject some reality into the hype. After all, when the hype-machine smoke clears and everyone wakes up from the party in Las Vegas, most of us realize that Sturgeon’s Law is true: 90% of anything created in any industry is crap.
In his examination of why so many new products are “undifferentiated and indistinguishable from the other products or services” a company offers, he focuses on what he calls “The Process of Crap,” the inputs, activities, and outputs of a company that lead to crappy products:
Strategy: It’s easy for a company to come up with a strategy; it’s another thing, in the face of time, competition, and business pressures, to actually follow that strategy. “The failure to live out strategy,” Phillips writes, “ensures the production of crap.”
Inputs: Phillips notes that while many companies have good intentions when it comes to listening to customers and the marketplace, the reality is that “many businesses have success filters that knock down ideas or reject insights that don’t align with the existing thinking and models and processes the business has codified.”
Activities: Simply put, industrial systems and processes are designed to produce crap because they “are…imbued with the ability to sand off the unusual bits, round the edges and ensure complacency and conformity.” In other words, not only is thinking different hard, actually “doing different” is even harder.
Culture: Finally, Phillips puts the blame squarely on corporate culture, which has turned many companies into “conformist, risk averse places where short term goals are paramount and satisfying the customer is king.” It’s tough to innovate and differentiate when doing things differently isn’t really rewarded.
Good lessons for all, even if you’re not making new electronic products. But next time you’re disappointed when you try out The Next Big Thing, now you’ll know why.
Holy Guacamole! If you’ve been looking for something to do over Winter Break, check out Archive.org’s Console Living Room, an amazing collection of bajillions of games from your favorite 80s and 90s consoles…for FREE! You’ll find games from the Atari 2600, Atari 7800, ColecoVision, Odyssey, and Astrocade systems playable via emulation right in your browser. If you’re a kid of the 80s or 90s, have an interest in the history of video games, or just want to challenge your middle-aged dad to a game of Donkey Kong, you have GOT to check this out. It’s probably one of the most significant contributions to computing history to come around in a long, long time.
The seemingly universal hatred of fedoras that’s cropped up in social media circles over the past year or so has always seemed somewhat inexplicable. After all, what’s so bad about a hat that millions of men used to wear every day? Of course, the real reason that they’re hated isn’t so much the hat itself as the people who wear them.
An interesting new paper published in Digital Culture & Education by Ben Abraham, entitled (not surprisingly) “Fedora Shaming as Discursive Activism,” sheds some surprising light on the issue of fedora-hating, and on how sites such as Fedoras of OK Cupid (highlighted here in a blog post), which shame particular fedora-wearers who express particularly misogynistic attitudes (and other irritating people who post pictures of themselves in the hat), actually fit well into a long history of feminist discursive protest in online communities. To quote the introduction:
In the following paper I present new research into a genre of feminist activism conducted on the social media site Tumblr, involving the curious choice to shame wearers of a certain type of hat. This choice might seem bizarre at first, but Fedoras of OK Cupid (FOOKC) belongs to an emerging form of feminist discursive activism that seeks to attach affective shame to the tropes and cultural objects associated with sexist and misogynistic attitudes and behaviours. Foundational research into online feminist activist communities has been done by Frances Shaw, who contextualises her research into “feminist discursive activism” within a larger challenge to theories of online publics and the problematic utopian ideals of participation.