University of Baltimore Digital Initiative

In September of 2013, the University of Baltimore launched a university-wide initiative to answer the question: “What does it mean to be living, working, playing, creating, and learning—and thriving—in this new digital world?”

Learn More

  • All your favorite old skool console games...free!
  • Printer or Weapon? UK Police Investigate

blog

News, announcements, inspiration, information, and discussion related to the UB Digital Initiative

digital@UB

Information about research, scholarship, teaching, and learning that engages with the evolving digital world

resources

Links to news, research, organizations, and people that examine how we live, work, learn, and play in the digital world.

events

News, announcements, images, commentary, and video related to the Digital Initiative and interesting regional events.

[MEDIA] If you’re doubting that automation is taking over white collar jobs, meet the robo-journalist

cartoon of robot

The Los Angeles Times became the first newspaper to publish a story written entirely by an algorithm. The program, designed specifically to report on earthquakes, automatically gathers data about recent quakes and assembles it into a “story” that gives the basic facts about each one. The program, written by journalist Ken Schwencke, can also report on crime in the city.
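To make the idea concrete, here is a minimal sketch (in Python) of how a template-driven quake story can be generated from structured data. The field names, template wording, and sample values are invented for illustration; this is not the actual program described above.

```python
# Minimal sketch of template-driven "robo-journalism": fill a story
# template from structured earthquake data. Field names and wording
# are hypothetical, not the LA Times' actual code.

QUAKE_TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance_mi} miles from "
    "{place} at {time}, according to preliminary data. "
    "The quake occurred at a depth of {depth_km} kilometers."
)

def write_quake_story(quake):
    """Turn a dict of quake facts into a publishable paragraph."""
    return QUAKE_TEMPLATE.format(**quake)

if __name__ == "__main__":
    sample = {
        "magnitude": 4.4,
        "distance_mi": 6,
        "place": "Westwood, California",
        "time": "6:25 a.m. Monday",
        "depth_km": 5.0,
    }
    print(write_quake_story(sample))
```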

Read more about Robo-reporter on Slate.  

[EDUCATION] Mind the Gap

[Note: this is a re-post of my recent post to the Office of Academic Innovation blog]

A recent survey of Americans and business leaders by Gallup (sponsored by the Lumina Foundation) had some pretty damning findings regarding public opinion of higher education. You can read the survey yourself (PDF download), but if you work in higher ed, I wouldn’t recommend doing so while you’re eating lunch.

Perhaps one of the most alarming findings in the survey was that less than half of Americans surveyed agreed that “college graduates in this country are well-prepared for success in the workforce,” and only 14% “strongly agreed” with that statement. Worse yet (especially if you’re a recent college grad or about to become one), only 33% of business leaders agreed that “higher education institutions in this country are graduating students with the skills and competencies that [their] business needs.” In fact, the employers surveyed were so negative about colleges’ ability to prepare graduates for the workforce that 71% of business respondents said that “all things being equal, including experience, ability, and company fit [they] would consider hiring someone without a post-secondary degree or credential over someone with a post-secondary degree.”

Ouch.

These findings are scary enough, but they get even scarier if you look at them in the context of another recent Gallup survey conducted on behalf of Inside Higher Ed. The 2014 Inside Higher Ed Survey of College & University Chief Academic Officers (PDF download) polled 842 chief academic officers/provosts from 418 public institutions, 261 private colleges and universities, and 42 for-profit institutions. And while most of the findings weren’t all that surprising (20% of CAOs surveyed “strongly agreed” that they wanted to be a college president someday, and only 12% agreed that the new Obama ratings initiative will help prospective students), the one that really jumped out at me was that 91% of respondents felt that their institution’s “academic health” (overall academic quality) was “good” or “excellent” and 89% felt that their institution was “somewhat effective” or “very effective” at “preparing students for the world of work.”

Huh. On the one hand we have the majority of Americans and employers feeling that colleges and universities are doing a terrible job at preparing students for the workforce. On the other hand we’ve got the vast majority of academic leaders reporting that they think their institutions are doing a good job preparing students for the world of work. Why such a huge perceptual gap?

We can blame some of it on economics. In a soft job market such as the one we find ourselves in now, employers can be a lot choosier about who they hire because there are so many people looking for work. Experienced, skilled workers aren’t hard to find, and they’re willing to work for less than in the past, a trend supported by the stagnation in real wages that began during the Great Recession and continues today. The bad job market also means that employers can ask for more out of applicants when hiring for lower-paying or entry-level jobs. In fact, a 2012 report from Georgetown University’s Center on Education and the Workforce found that a post-secondary degree is a requirement for an increasing number of jobs, with 2.2 million jobs created between 2007 and 2012 requiring a bachelor’s degree.

As a former employer (I ran a digital agency for about 10 years and was in charge of hiring most of the employees), I find that these numbers make perfect sense. Like it or not, all employers seek to hire the best people they can get for the least amount of money possible. When I started my agency in the midst of the mid-to-late ’90s dot-com boom, I had to pay through the nose for qualified web developers and designers because there just weren’t that many in the job market, and those who were looking for work were in high demand from startups flush with VC cash. Today there’s a surplus of web developers and designers looking for work and, based on anecdotal evidence I’ve gathered from friends still in the industry, they’re getting paid less than they were 10-15 years ago. Considering that many of these entry-level “creative economy” jobs require more hard skills and demonstrable talent (proven via a portfolio), it doesn’t surprise me in the least that Gallup discovered that many employers are willing to look at candidates without degrees…all things being equal, they’ll probably work for less.

But economics doesn’t necessarily explain the gap between employer dissatisfaction with college grads’ skill levels and academic leaders feeling like they’re doing a great job preparing students for the job market. Perhaps the “skills gap” isn’t as much about measurable skills as it is about perceptual differences.

If you look at what employers value in job candidates (rather than their satisfaction or dissatisfaction with applicants), the perennial answer coming out of “what employers want” surveys is that the so-called “soft skills” lead the list (a topic my colleague Brian Etheridge posted about yesterday on this blog). In the most recent “employer wants” survey published by the National Association of Colleges and Employers, “technical knowledge related to the job” was ranked 7th out of the 10 traits employers were looking for in new hires (see below).

Ranking of skills and qualities employers are looking for in job candidates
NACE 2013 Candidate Skills/Qualities Survey
As you can clearly see, the most desirable qualities are those typically associated with the classic liberal arts education: teamwork, problem solving, organization, communication, research and analysis.

If you look at the “skills gap” between employers and academics in the context of the NACE report, the reason for the gap starts to become clearer. Since many academic leaders came up through a more traditional liberal arts education, they’re going to hold these “softer” skills in high regard and feel satisfied with their institution’s ability to prepare students for the job market if they feel these skills are being emphasized in the curriculum. Many are also bolstered by studies such as “How Liberal Arts and Sciences Majors Fare in Employment,” published in January by the AAC&U, which found that over the long term liberal arts graduates actually do pretty well for themselves…provided they get a master’s degree at some point. Apparently, Mr. President, it turns out that art history grads actually can make a good living.

But why do employers seem to be talking out of both sides of their mouths when it comes to the skills and qualities they’re looking for in job candidates? How can Gallup find them pessimistic about higher education’s ability to prepare grads for the workforce while the NACE survey seems to say that employers aren’t looking for specific skills as much as they are looking for employees who can collaborate with colleagues, communicate effectively, and find and analyze information? If the softer skills are so important, why are studies finding higher rates of unemployment (or underemployment) among liberal arts majors who, it can be assumed, graduate with the kinds of skills employers say they’re looking for?

The answer, I think, is time. Looking back on my hiring days, I can safely say that when we needed to hire a new employee, we probably needed them yesterday. If a gap in the company opened up due to employee turnover, increased business, or someone being let go, that gap could become a gaping hole in our business if it wasn’t filled quickly. There was work to be done and it needed to be done now, not at some later date after a new hire had time to learn specific skills on the job. Every hour they spent learning (and not producing) meant another hour that couldn’t be billed. And when belts have to be tightened during a recession few businesses are going to be interested in paying the costs required to teach new employees how to do their jobs…especially if there are lots of qualified candidates to choose from willing to work for lower wages.

Educators think long-term. Employers, unfortunately, often think short-term. But neither way of thinking is necessarily wrong. Colleges and universities have traditionally considered their role to be preparing graduates for life, while employers rarely have the luxury of thinking beyond the next quarter, especially in tough economic times. In times of economic stability, however, they can begin to look longer term. Unfortunately we’re not there yet, and haven’t been for a while.

Perhaps the answer to closing (or at least narrowing) the skills gap is to recognize the need to strike a balance between employers’ short-term, easily-definable skills needs and the benefits of developing skills such as critical thinking, problem solving, communication, and collaboration that will benefit students over the course of their lives. Of course, many of us in higher education say that we do this now through general education requirements, “writing across the curriculum” initiatives, and other programs designed to develop the skills and qualities that define what it means to be a “college graduate” no matter what a student decides to major in. But it can be tough to maintain these ideals when everyone from parents to the President is focused on the short-term employability of graduates, the need to increase participation in STEM disciplines, and anxiety over the increasingly rapid pace of technological development and its impact on society.

Striking a balance between long-term and short-term needs can be tough in any situation, but it seems to be particularly tough when it comes to preparing undergraduates to thrive in today’s world. The structure and pace of undergraduate education developed over a long time and is highly resistant to change. But we have to recognize that the way we educate undergraduates was, for the most part, developed during a time when the pace of change was much slower and the need for a college education much lower for those looking to enter the workforce. According to the Georgetown Center on Education and the Workforce (PDF download), by 2018, 63 percent of job openings will require workers with some college education. In 1973, that number was 28 percent.

If employers are expecting job candidates to have college degrees and specific job skills (many of which are technology-related), it may be a mistake to think that we can teach those skills over the course of the four (or five, or six, or longer) years it takes an undergraduate to earn a degree. The pace of change is just too fast. It’s no wonder that most employers think that graduates aren’t prepared…the skills they’ve learned are obsolete by the time they graduate.

The answer to striking that balance, and closing the skills gap, probably lies in several avenues. A greater emphasis on experiential learning through internships, co-op programs, practica, and even apprenticeship-style training would let students gain valuable, real-life workplace experience. Re-thinking the structure of the undergraduate experience would give students greater flexibility and more intensive development of their skill base: long-term foundational knowledge and critical skills could be built through more traditional semester-length (or even longer) experiences, while specific technical skills could be developed through shorter, intensive formats that emphasize real-world applications within the student’s chosen discipline. And working to build bridges across disciplines would help better prepare students for a world that’s increasingly interdisciplinary and would encourage the kind of innovative thinking that’s essential to their future success.

The “skills gap” may be a combination of economic conditions, perceptions, and priorities, but that doesn’t mean it’s not real. We can either stick our heads in the sand and keep pretending that everything’s OK (and suffer the consequences of not changing) or we can work to innovate how we educate in order to meet the challenges (and realities) of today…and the future.

[TOOLS] Wolfram Language: This might just change everything

Stephen Wolfram, leader of the company behind Wolfram Alpha and Mathematica, has just announced the imminent launch of Wolfram Language, an incredibly powerful interactive programming language with built-in knowledge of the world (e.g., it knows things like state capitals and English grammar) and a huge set of functions that let users manipulate that data, bring new data into a program with just a command (e.g., there’s a function that “scrapes” web sites and imports all the links on the site), and manipulate it symbolically so that it can be used throughout the programs they write. It looks pretty easy to learn, too. What all this really means is that non-techie types will be able to extract data from the internet, manipulate that data, use it to make decisions, and then output the results of those decisions in a whole range of ways, including graphs, charts, geo-plots, 3D graphics, and more. And because the Wolfram Language also comes with built-in “hooks” for talking to a wide variety of devices, you can even use it to trigger devices outside of your computer.

Wolfram’s announcement video (narrated by Wolfram himself) does a much better job of explaining the language’s capabilities, but if Wolfram Language lives up to what he promises, it may open up a huge flood of innovation and creativity from people who never thought they’d be able to use a programming language, because so much of the stuff that takes lines and lines of code in other languages is reduced to a command or two in Wolfram Language. It should be pretty interesting to see what people do with this new tool!
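For a rough sense of the difference, here is what one of the tasks mentioned above (pulling every link off a web page) typically looks like in a general-purpose language. This is ordinary Python using the third-party requests and beautifulsoup4 packages, shown purely for comparison; it is not Wolfram code, and according to the announcement the same job is a single built-in command in Wolfram Language.

```python
# Comparison sketch: collecting every link on a web page in Python with
# the third-party requests and beautifulsoup4 packages. The post above
# notes that Wolfram Language reduces this kind of task to one command.
import requests
from bs4 import BeautifulSoup

def page_links(url):
    """Return the href of every <a> tag on the page at `url`."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)]

if __name__ == "__main__":
    for link in page_links("https://www.example.com"):
        print(link)
```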


[TOOLS] How to do some really cool data analysis on the cheap

image of network

Network graph from Gephi

Historian Michelle Moravec recently posted a fantastic article on the Chronicle of Higher Education’s ProfHacker blog where she dishes up some great advice for scholars looking for new digital tools to help them in their work. She recounts the lessons she learned while working on her Visualizing Schneemann project, which uses different digital network analysis methods to analyze the correspondence of feminist artist Carolee Schneemann in order to get a better picture of Schneemann’s social networks, influences, travels, etc. The end result is an amazing collection of graphs that really help viewers better understand the artist.

You can (and should) read the article for yourself, but I wanted to highlight a few of the digital tools Moravec used in her research. Not only are these tools free, but they could be a huge help to anyone interested in data visualization or digital textual analysis, or even just folks who want to actually “see” their own social networks in a new way. Happy hacking!

Gephi: A free, open source, high-powered tool for working with network graphs. If you don’t know anything about network theory, don’t panic: there are a number of tutorials that walk you through the basics of creating your own graphs. It even has a built-in tool that allows you to analyze your email account so that you see a visual representation of who you correspond with and their relationships to each other.
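If you'd rather build a network programmatically and then explore it in Gephi, one common route (my suggestion, not something from Moravec's article) is to assemble the graph with Python's networkx library and save it as a GEXF file, a format Gephi opens directly. A minimal sketch, with made-up correspondence data:

```python
# Build a tiny correspondence-style network with networkx and export it
# as GEXF, which Gephi can open directly (File > Open). The names and
# letter counts below are invented for illustration.
import networkx as nx

G = nx.Graph()

# Each tuple is (sender, recipient, number of letters exchanged).
letters = [
    ("Schneemann", "Correspondent A", 12),
    ("Schneemann", "Correspondent B", 5),
    ("Correspondent A", "Correspondent B", 2),
]
for sender, recipient, count in letters:
    G.add_edge(sender, recipient, weight=count)

nx.write_gexf(G, "correspondence.gexf")
print(G.number_of_nodes(), "nodes and", G.number_of_edges(),
      "edges written to correspondence.gexf")
```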

Raw: Raw is a much more general-purpose data visualization tool than Gephi and a heck of a lot easier to use for novices. All you need to do is cut and paste your data into the Raw web site from a source like Excel or even a flat-file database, set a few parameters, and poof! you’ve got a really snazzy looking graph in vector format (meaning you can easily resize it without getting all bit-mappy) that’ll impress anyone who sees it next time you have to do a presentation or turn in a paper.

StanfordNER: This one’s a little more obscure and requires a bit more technical knowledge to set up, but if you’ve ever wished for a tool that could automatically identify and extract “entities” such as names, places, and organization names from big chunks of text, the Stanford Named Entity Recognizer uses some pretty fancy natural language processing techniques to do just that.
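To give a flavor of what running it looks like, here is a hedged sketch using NLTK's Python wrapper around the Stanford NER tool. The jar and classifier paths are placeholders (point them at the files in your own Stanford NER download), and you'll need a local Java install plus NLTK's tokenizer data.

```python
# Sketch: tagging named entities via NLTK's wrapper for Stanford NER.
# Requires a local Java install, the Stanford NER download, and NLTK's
# "punkt" tokenizer data. Both paths below are placeholders.
from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

st = StanfordNERTagger(
    "english.all.3class.distsim.crf.ser.gz",  # classifier model (placeholder path)
    "stanford-ner.jar",                        # NER jar (placeholder path)
)

text = ("Carolee Schneemann corresponded with artists in New York and "
        "exhibited work at the Museum of Modern Art.")

# tag() returns (token, label) pairs; "O" marks tokens that aren't entities.
for token, label in st.tag(word_tokenize(text)):
    if label != "O":
        print(token, label)
```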

TimeMapper: Probably the coolest tool on the list (though I have to admit that I kinda have a soft spot for timelines), TimeMapper allows you to create really cool map/timeline mashups for free. All you have to do is fill out a Google spreadsheet (provided) with information such as the name of your timeline entry, dates, description, and place and TimeMapper takes your info and creates a custom timeline tied to a custom Google map so that you can visualize your information not only in time (though the timeline) but in space as well (on the Google map). The Google spreadsheet at the heart of TimeMapper even automatically converts location names to latitude and longitude coordinates! You can even spiff up your timelines with images and links for additional information.
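For reference, the spreadsheet behind a TimeMapper timeline is just one row per entry. The sketch below writes that kind of table as a CSV with Python's csv module; the column names mirror the fields described above and are illustrative, so use the Google spreadsheet template TimeMapper provides for the exact headers it expects.

```python
# Sketch of the one-row-per-entry table TimeMapper works from. Column
# names here follow the fields described in the post and are
# illustrative; TimeMapper's own Google spreadsheet template has the
# exact headers it expects.
import csv

rows = [
    {"Title": "Exhibition opens", "Start": "1964-05-01",
     "Description": "First show of the correspondence project.",
     "Place": "Baltimore, Maryland"},
    {"Title": "Artist residency begins", "Start": "1968-09-15",
     "Description": "Three-month residency abroad.",
     "Place": "London, England"},
]

with open("timeline.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Title", "Start", "Description", "Place"])
    writer.writeheader()
    writer.writerows(rows)

print("Wrote", len(rows), "timeline entries to timeline.csv")
```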

Graph showing polarized twitter topic network

Polarized Twitter network. Click the image to see more detail.

Edit: cool new study…and a bonus tool! Not long after I posted this I happened to run across this article on mapping Twitter Topic Networks on the Pew Internet and American Life site. It’s a really fascinating piece that identifies six distinct patterns of behavior on Twitter:

  • Polarized Crowds (see the image above), where groups on either side of an issue talk to each other but interact little with their opponents
  • Tight Crowds of “highly interconnected people” with a few others from outside the main group
  • Brand Clusters, where well-known products, services, or celebrities form a hub that attracts large numbers of people from all over the Twittersphere
  • Community Clusters that form a number of smaller groups with their own individual characteristics
  • Broadcast Networks, mainly consisting of people retweeting news from major news sources
  • Support Networks that form around major brands that use Twitter for customer support

The article also includes a fantastic Method section that describes in detail how the researchers used the free NodeXL Excel Template to collect and visualize the data in their study. NodeXL can do some pretty cool stuff: check out the NodeXL Graph Gallery for tons of examples.

[EVENTS] NET/WORK Baltimore job fair: Thursday, February 20, 2014

Photo from Technical.ly Philadelphia

Reserve your free tickets now for NET/WORK Baltimore on Thursday, February 20th. If their last event (Technical.ly Philadelphia, shown here) is any indication, it looks like it’ll be a great event for job-seekers!

Local tech news hub Technical.ly Baltimore is hosting a jobs fair on Thursday, February 20th at the Emerging Technology Center at 101 N. Haven St. in Highlandtown. With over 16 technology-related firms attending (and planning on hiring people now), this event is a must-attend for anyone looking for a job in web design and development, information systems, cybersecurity, game design and development, technology consulting, programming, mobile app development, marketing/advertising, or e-commerce. A number of Baltimore-based non-profit technology community groups will be in attendance, too, including Accelerate Baltimore, Betamore, Digital Harbor Foundation, and Girl Develop It Baltimore.

Tickets are usually $5, but students with a valid ID get in for free. Check out the event site to learn more and reserve your ticket before the event sells out.

Some of the firms planning on recruiting at NET/WORK Baltimore include:

[MOBILE] Baltimore in top 10 of US cities with most smartphone users

Graph of results from Nielsen survey

Baltimore ties for 5th in the US for highest percentage of smartphone users.

If you needed another reason to feel smart for living in and around Baltimore (and for going to UB!), here’s one: Baltimore ranks #5 among US markets for smartphone users. According to a recent survey by Nielsen (famous for tracking TV and radio usage), 72% of Baltimore-region mobile phone users have smartphones, just above Chicago, IL’s 71% and just behind Miami, FL’s 73%. The top smartphone-using city in the US was Dallas, TX, with 76% of mobile users sportin’ smartphones. On average, 67% of US mobile subscribers use smartphones.


[TOOLS] Walters Art Museum opens up digital collection with new free API

 

Image from inside Walters Art Museum

New API allows digital content creators to incorporate elements of the Walters’ collection of 10,000 objects into their own programs

The Walters Art Museum in Baltimore just announced the public release of an API (Application Programming Interface, for you non-coders out there) that allows creatively minded programmers and artists to access the museum’s collections via a wide range of parameters, including individual items, their location in the Museum, specific exhibitions, geographic origin, and many others (name, keyword, catalog ID, etc.).

“So what?” you might be asking. Fair question. Basically this new API allows people to build custom web-based apps (or, with a language like Processing, even stand-alone apps) that draw their content from the collection or allow users to make customized searches through the Walters’ huge collection of 10,000+ objects that literally spans thousands of years of human history. Rather than have to “scrape” content off of the Walters’ site and risk broken links and difficult-to-manage code, people who want to incorporate art from the collection can query the database directly. Better still, everything’s available under a public license that merely requires those who tap into the database to credit the Museum.
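As a rough illustration of what querying the database directly looks like, here is a hedged sketch using Python's requests library. The endpoint, parameter names, and response fields below are assumptions made for the example; check the Walters' own API documentation for the actual URL, parameters, and key registration.

```python
# Hedged sketch of querying a museum collection API over HTTP. The
# endpoint, parameter names, and response field names are assumptions
# for illustration -- consult the Walters' API documentation for the
# real details and to register for an API key.
import requests

API_URL = "http://api.thewalters.org/v1/objects"  # assumed endpoint

params = {
    "apikey": "YOUR_API_KEY",   # issued when you register (placeholder)
    "keyword": "manuscript",    # free-text search term
    "page": 1,
}

resp = requests.get(API_URL, params=params, timeout=10)
resp.raise_for_status()

# Field names below ("Items", "Title", "ObjectID") are assumed.
for item in resp.json().get("Items", []):
    print(item.get("Title"), "-", item.get("ObjectID"))
```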

Should be pretty interesting to see what people come up with!


[TRENDS] What technology will do to tomorrow’s jobs…and what to do about it.

19th century engraving of Luddites smashing factory equipment

A new article in The Economist takes a hard look at how technology-driven automation will impact jobs in the next 20 years. Their conclusion? Just as the Industrial Revolution “swept aside” the livelihoods of those who made things by hand, the Digital Revolution will eliminate ever-increasing numbers of the jobs considered “white collar” (and automation-proof) today. The result will be increasing income inequality and rising unemployment. The disruptions of recent years provide a model for where things may be headed in the future:

[We're looking at] history repeating itself. In the early part of the Industrial Revolution the rewards of increasing productivity went disproportionately to capital; later on, labour reaped most of the benefits. The pattern today is similar. The prosperity unleashed by the digital revolution has gone overwhelmingly to the owners of capital and the highest-skilled workers. Over the past three decades, labour’s share of output has shrunk globally from 64% to 59%. Meanwhile, the share of income going to the top 1% in America has risen from around 9% in the 1970s to 22% today. Unemployment is at alarming levels in much of the rich world, and not just for cyclical reasons. In 2000, 65% of working-age Americans were in work; since then the proportion has fallen, during good years as well as bad, to the current level of 59%.

So what can be done? The answer, according to The Economist, is (in large part) education. The education system itself needs to be overhauled:

…schools themselves need to be changed, to foster the creativity that humans will need to set them apart from computers. There should be less rote-learning and more critical thinking.

And what if we don’t change things? The lessons of the past tell us that maintaining the status quo will lead to a world of even greater disruption. Only by beginning to change today will we be able to head off the inevitable revolution(s) of tomorrow:

Innovation has brought great benefits to humanity. Nobody in their right mind would want to return to the world of handloom weavers. But the benefits of technological progress are unevenly distributed, especially in the early stages of each new wave, and it is up to governments to spread them. In the 19th century it took the threat of revolution to bring about progressive reforms. Today’s governments would do well to start making the changes needed before their people get angry. 


[HISTORY] If we’re in the future, where’s my personal robot assistant and mind/computer link?

picture of 2 B.O.B. robots and an attached picture of woman with headband using MindLink

B.O.B. (1983) and MindLink (1984) at the Consumer Electronics Show

Here are two pictures from the Consumer Electronics Show taken in 1983 and 1984…31 and 30 years ago, respectively. The picture on the left is of B.O.B., short for “Brains On Board” (clever, huh?), billed as a “personal robot assistant” from creator Androbot, one of the first companies funded by Atari creator Nolan Bushnell’s Catalyst Technologies Venture Capital Group (one of the first venture firms to focus on high tech in the way we think of it today). B.O.B. was designed to be the smarter younger brother of the earlier Topo robot. Topo was sold as “a mobile extension of your personal computer” and (no surprise) required an external computer to do the heavy-duty processing it needed to roll around. Topo sported some pretty impressive specs for the time: three 8031 microprocessors, wireless communication with your home PC via infrared link, five slots for additional sensors, and even limited speech capability.

handout for Topo personal robot

Topo handout

Unfortunately, as a “personal assistant” Topo was pretty lame. He didn’t really have any sort of attached manipulator (arm, hand, etc.) and didn’t come standard with any sensors, so he couldn’t do much more than roll around on his two angled wheels (check out Topo’s bottom in the picture above) while being remotely controlled by a person sitting at their Apple ][+ personal computer. Topo might have made for a cool party trick, but he was a long way from The Jetsons’ Rosie.

image of robot serving cake to Jetson family

Rosie the Robot from The Jetsons

The right half of the CES picture at the top of this post shows a woman using MindLink, a never-released controller for the wildly popular Atari 2600 video game console. Billed as a controller that let you use your mind to control the action on the screen, MindLink in reality used sensors to read muscle movements in the player’s forehead. Unfortunately, a gaming system basically controlled by one’s eyebrows never really took off, because early testers complained of headaches induced by forehead muscle strain. However, according to this article from the Atari Museum site, the headband controller could be strapped to a “bicep or thigh” and users could be trained to use those muscles to control simple Pong-like games. Fun, huh?

image of teenager using mindlink

MindLinkin’ it solo, 1984 style

 

So where have we gotten in 30 years? Where are the robots we were promised? Why aren’t we “jacking in,” using our minds to control our computers rather than typing on keyboards that haven’t changed much since the first “dumb” CRT terminal hit the market back in the 1970s?

Possibly a lot closer than many of us realize.

While household “servant”-type robots aren’t exactly everywhere, special use home robots have become pretty common. Manufacturer iRobot has sold over 8 million floor-cleaning robots since the Roomba was introduced in 2002 and millions of other “domestic robots” have been sold over the years from a variety of manufacturers for tasks such as floor cleaning, pet care, and even ironing.

But the real robotic action has been taking place outside the home. Unmanned Aerial Vehicles (“UAVs” or “drones”) have taken over many of the roles traditionally relegated to human-piloted aircraft in the military and are now starting to become more common in the civilian world. Amazon has even announced that it will start limited drone-based delivery soon, though we’ll have to wait and see what that looks like.

Land-based robots have also come a long way. Led by innovative firms such as Boston Dynamics (recently acquired by Google), the new batch of autonomous ground-based robots is a far cry from ol’ B.O.B. of 30 years ago.

Boston Dynamics’ “WildCat”

Boston Dynamics’ “Petman”

Honda’s ASIMO

Besides the creepy “uncanny valley” feeling that comes over many of us when we see these increasingly life-like robots, one of the most interesting developments has been the shift away from the general-purpose, human-like “mechanical men” of the past, which strove to replicate humanity, toward today’s specialized robots, designed to do things that people can’t do and built in forms that bear little resemblance to humans. Sure, ASIMO might resemble Verne Troyer in a space suit, but it’s pretty clear from its leg joints that we’re not looking at a diminutive person. And yes, Boston Dynamics’ “Petman” does creepily resemble a soldier in full CBW (Chemical and Biological Weapons) gear, but that’s because it’s been designed to test clothing made for humans, not because Boston Dynamics was trying to replicate a “man without a heart” like the Tin Man from The Wizard of Oz. Today, form arises from function: floor-sweeping robots are flat discs because they work better that way; Rosie the Robot of The Jetsons would have just as hard a time vacuuming under furniture as the humans she was modeled on.

In many ways the development of robotics over the centuries, from concept to mechanical automaton to the drones of today, tells us a lot about the desires and aspirations of humanity. Homer called them “Golden Servants,” robots better than humans, forged by the gods to serve the gods. Golems of the early Talmud could only be created by those closest to God. [editor's note: added after the first comment] Early Chinese artisans developed complex, human-like automata designed to mimic humans by outdoing them, even when playing as an orchestra. Leonardo used his knowledge of mechanics and human anatomy to design robot knights powered by linkages modeled on human tendons and muscles. Artisans of the 16th and 17th centuries carried on Leonardo’s fascination with humanity and the power of science and technology to build increasingly complex “model humans” (or “androids,” as they were later dubbed in the 18th century in reference to an automaton attributed to the German alchemist Albertus Magnus), such as the “mechanical monk” built around 1560. Automata of the 18th century reflected the Humanist spirit by attempting to re-create artificial people who could write, draw, sing, and play music. The Industrial Age brought about its own mechanical marvels, designed to do work better, faster, cheaper, and more productively than the troublesome humans toiling in factories or the unreliable human “calculators” churning out tables of numbers by hand. As the 20th century dawned and the First World War demonstrated what happens when mechanization is applied to destruction, robots (mostly fanciful creations in theater or film) served as reminders of what happens when humanity is removed from life. Asimov’s “Three Laws of Robotics,” first published in 1942 as the Nazis were grinding up Europe, placed on robots a code of ideals designed to eliminate the chance of robot/human conflict: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey orders given it by human beings except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

From Metropolis, 1927

The postwar conception of robots (up until the late 1960s) was all about optimism, triumphant science, and the ideal of never-ending, universal prosperity. After the horrors of World War II, we wanted to create human-like things that could do the things that humans shouldn’t have to do (cue Rosie the Robot again, always eager to do the dirty housework that Jane Jetson never wanted to do). Science – which had created the atomic bomb, conquered the sea and the skies, and had begun to conquer even Heaven (space) itself – seemed inexorable and omnipotent. When computer scientist Alan Turing defined what was later to be called “artificial intelligence” through the Turing Test (a test that can only be passed by a computer becoming indistinguishable from a human), the idea that science could create a sentient being didn’t seem all that far off. When George Devol and Joseph F. Engelberger met over martinis and decided to form the first robot company (Unimation, which created the first industrial robot, Unimate), they were motivated by a desire to free humans from the drudgery of “putting and taking,” tasks that made up 50% of the work in factories. Their thoughts were of liberating humans, not impoverishing them by eliminating their usefulness. The Industrial Age had finally fulfilled its promise.

Much of the tumult of the late 1960s arose out of what might have been an instinctual realization that machinery and humanity might not be able to co-exist peacefully. When 2001: A Space Odyssey‘s HAL decides that it doesn’t need the humans who accompanied it on a trip to Jupiter to probe an ancient mystery, the anxiety over HAL echoed the growing anxiety of humans about the machines that seemed to be replacing them and running their lives. Only by shutting down the artificial intelligence is Dr. Bowman (and humanity) able to move forward to the next stage of human evolution.

HAL decides to do something else

The period from the ’70s through the early ’90s was a time of ambivalence toward robotics. Great technological strides were made by scientists working to make robots more intelligent and responsive to their surroundings. A robot lander conquered Mars when the ironically named Viking Lander soft-landed there during the height of the US Bicentennial celebration, providing another reminder to the world of US technological and scientific superiority. Eleven years later, automated (robotic) stock trading would nearly crash the entire US economy. It was a robotic danger narrowly avoided, unlike the apocalypse brought on by the rogue machines controlled by “SkyNet” in 1984’s The Terminator.

But while people may have been growing increasingly wary of technology in the 1980s (we now tend to forget that the nascent video game industry nearly destroyed itself in 1983 when it overextended), the dot-com boom of the mid-to-late 1990s did much to alleviate that anxiety, at least for a while. At the time, “technology” seemed transcendent as technology entrepreneurs were rewarded for their efforts with unimaginable wealth. Where technology once was scary, now, in the age of the Internet, we’d mastered it and bent it to our will. Robots might still fight, but now they fought for us in Robot Wars. Engineers and scientists were creating robots that swam, explored alien worlds, digested food, and even drove us into buying frenzies just to be entertained by their robotic antics. When Honda first introduced the jaw-droppingly humanoid ASIMO in 2000, it seemed to many that there was nothing technology couldn’t achieve.

In retrospect, it seems that the 8,000-mile flight of the Global Hawk, one of the first modern drones, in April of 2001 was a harbinger of a new age in robotics. Prior to Global Hawk we’d had robots that served and amused humanity in a way more or less compliant with Asimov’s Three Laws of Robotics. When DARPA created the Global Hawk (which, to be accurate, first took to the skies in 1998), humanity was, for the first time, creating a robot designed to facilitate the killing of human beings through surveillance. In an age suddenly plunged into cynicism and fear by the events of September 11th, 2001, Global Hawk arrived just in time to keep an eye on the increasingly dangerous world we’d found ourselves in. For technologically based societies such as the US, the human cost of war had now become more distant. For the less technologically advanced, war was about to become much closer, more sudden, more surprising, and increasingly hard to resist.

Drone strike footage

Today, more than halfway through the first quarter of the 21st century, robots have become increasingly disconnected from humanity. We no longer strive to create human analogues that are more than human, or optimistically look to robots as something that can liberate humanity from drudgery so it can ascend to a higher state. Instead, robots have become de-anthropomorphized others under our control, designed to do the dirty jobs we don’t want to do. From spending hours lingering over a war zone in order to make a kill, to lugging war materiel over rugged terrain, to sweeping our floors, taking the wheel to “save” us from the drudgery of commuting, watching our children, and even cruising through our arteries as “nanobots” to clear them of deadly plaques built up over a lifetime of indulgence, robots now serve as a way of separating humans from the consequences of their actions. Is it any wonder that in an age where experience is increasingly mediated through screens carrying ephemeral electronic traces, we have created devices to mediate reality for us?

Double Robotics iPad-based telepresence robot

 

Next: Jacking into the mind-machine interface

[BUSINESS] CES starts tomorrow: here’s why 90% of the products will probably be crap

Image of scrap yard sign

Photo courtesy of Jamie Mellor under Creative Commons license

Jeffrey Phillips, author of Relentless Innovation, recently posted a really interesting piece on why so many companies churn out crappy products. And with the Consumer Electronics Show about to start tomorrow, it seemed like a perfect time to inject some reality into the hype. After all, when the hype-machine smoke clears and everyone wakes up from the party in Las Vegas, most of us realize that Sturgeon’s Law is true: 90% of anything created in any industry is crap.

In his examination of why so many new products are “undifferentiated and indistinguishable from the other products or services” a company offers, Phillips focuses on what he calls “The Process of Crap,” the inputs, activities, and outputs of a company that lead to crappy products:

  • Strategy: It’s easy for a company to come up with a strategy; it’s another thing, in the face of time, competition, and business pressures, to actually follow that strategy. “The failure to live out strategy,” Phillips writes, “ensures the production of crap.”
  • Inputs: Phillips notes that while many companies have good intentions when it comes to listening to customers and the marketplace, the reality is that “many businesses have success filters that knock down ideas or reject insights that don’t align with the existing thinking and models and processes the business has codified.”
  • Activities: Simply put, industrial systems and processes are designed to produce crap because they “are…imbued with the ability to sand off the unusual bits, round the edges and ensure complacency and conformity.” In other words, not only is thinking different hard, actually “doing different” is even harder.
  • Culture: Finally, Phillips puts the blame squarely on corporate culture, which has turned many companies into “conformist, risk averse places where short term goals are paramount and satisfying the customer is king.” It’s tough to innovate and differentiate when doing things differently isn’t really rewarded.

Good lessons for all, even if you’re not making new electronic products. But the next time you’re disappointed when you try out The Next Big Thing, at least you’ll know why.
