Archive for Web 3.0
by Celine Roque
In order to make a good impression, one has to be polite. So I typed in a friendly “Hi”, and to my surprise, it replied, “Hello, human.” Next, I asked it where it was located. “I live on the Internet.” Fair enough, I suppose. Lastly, I inquired about its nature, and it told me, “I am a computational knowledge engine.”
I would’ve liked to carry on a “conversation” with it, but Wolfram|Alpha’s human discourse module has not yet been fully developed. However, I was able to get great information on constellations, compare the GDP of various countries, create a timeline of famous Roman emperors, and get up-to-date weather information, automatically localized to my area. I’m sure that its linguistic abilities will be improved in the near future, though, along with a host of upgrades in keeping with its lofty ideals:
“Wolfram|Alpha’s long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. Our goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.”
If you haven’t heard of it yet, this ambitious new tool is the brainchild of scientist Stephen Wolfram, and is unfortunately being touted as the next Google-killer. Unfortunate, because the two use different methods to accomplish different goals, and therefore shouldn’t be compared as if they were mortal enemies. Google is used to find other websites pertinent to a search term among its enormous index. Meanwhile, Wolfram|Alpha generates new answers to inquiries, computed in real time from relevant public data. One has a web crawler to broaden its reach; the other has a centralized curation process to comb through data. Both can be very useful in their own ways.
The first thing you’ll notice about Wolfram|Alpha is its clean, minimalist design. It has a text box at the top of the page to type your queries into, be it a question or a mathematical formula. It offered fast answers for almost every inquiry I made, and as a data geek I appreciated the graphs that would sometimes accompany the results. In a video on their blog, they said that they are currently averaging a 70% satisfaction rate in terms of giving meaningful answers to queries. The fact that it did not crash during its launch last May 18 – even after all the hype it generated – is a testament to their preparedness and awesome infrastructure.
Wolfram|Alpha is great for comparisons: stock fluctuations, box office grosses, demographics by country, timelines of famous people, chemical properties, etc. The technology behind it has immense potential for custom business applications, enabling managers to pull up and compare sales trends, fiscal reports, and employee retention in a snap. In this sense, I’m not worried about Wolfram|Alpha’s long-term financial viability. I think it can find a way to sustain itself with its focus on providing a valuable tool for businesses, engineers, college students, scientists, economists, and the like.
Wolfram|Alpha, striking as it is, is not new or unique (see sCloud), but it’s the first of its kind to create mainstream buzz, and its approach is quite promising. The results it churns out are informative, though a bit lacking in depth. At the moment, only US data are abundant, while data for most other countries are scarce. Unlike Wikipedia, it has no specific attribution of source per statement or datum. Instead, there is a link at the bottom that shows a general list of all the references for the inquiry. This makes it hard to do quick checks on accuracy.
This tool is best for objective analysis, so you won’t find film reviews, news and opinions, historical accounts, and similar articles here. Neither is it suited for entertainment and other consumer purposes. As of this time, Wolfram|Alpha is said to contain 10+ trillion pieces of data, 50,000+ types of algorithms and models, and linguistic capabilities for 1,000+ domains. This may seem like a lot, but it barely scratches the surface of recorded information. I’m not quite sure if they use a form of crawler to automate data gathering, but a curation process that is centralized and needs human supervision severely limits the speed of data acquisition. With a staff of fewer than 100 people, I wonder how quickly and accurately they can add new information, with lots more being generated all the time.
Wolfram|Alpha is an incredible resource for quickly getting organized, factual information, but it’s not for everyone. Its tagline alone (“computational knowledge engine”) will be enough to make some people scratch their heads. In time, it will find its niche, which will likely be a profitable one if they can manage to maintain a high quality of service. Although Wolfram|Alpha has a blog and a community, it is not open to the general public for editing, unlike Web 2.0 sites. This is a walled garden, with experts in every field trying to maintain the purity of their data.
If you haven’t tried Wolfram|Alpha, I’d advise you to watch Stephen Wolfram’s helpful video primer. It’s hard not to be blown away by the possibilities.
by Jenny Ambrozek
Subtitled “Tapping Online Social Networks to Build Better Products, Reach New Audiences, and Sell More Stuff,” the book is a must-read, and especially useful as a primer for those still needing to understand the fundamental changes in doing business as the Internet has matured from Web 1.0 to:
“an entirely new level with Web 3.0 – an era that is entirely about innovation and collaboration.” (Foreword, page ix)
An excellent overview of the book, in author Clara Shih’s own words, is in two parts at the Entrepreneur’s Journeys blog. Not surprisingly, the book’s home page is on Facebook, and twenty-four 5-star Amazon reviews indicate the book’s value.
The book’s section titles – starting with “A Brief History of Social Media” through “Transforming the Way We Do Business” to “Your Step-By-Step Guide to Using Facebook for Business” – reveal the key themes. Reflecting the author’s hands-on experience as the developer of FaceConnector and head of Enterprise Social Networking Alliances and Product Strategy for Salesforce, the book is filled with the lived experiences of companies using social networking to “build better products, reach new audiences and sell more stuff.”
If there are gaps in the book, they reflect the state of the industry. For example, “The ROI of Social” is addressed in half a page (p. 205), beginning:
“Understandably, a large number of you are focused on ROI and might feel frustrated that there has been no clear quantifiable data around ROI”
and concludes by suggesting:
“ROI will become much more quantifiable and standardized.”
Have you read “The Facebook Era”? What did you take away?
~ Jenny Ambrozek
by Celine Roque
Last week I touched on the topic of the future of the Web – today, a few more prognostications from prominent players in the field. In a recent guest post over at TechCrunch, Salesforce.com’s CEO Marc Benioff talked about his vision of Web 3.0. In a nutshell, it’s a vote of confidence for a paradigm shift from Software-as-a-Service to Platform-as-a-Service.
“The new rallying cry of Web 3.0 is that anyone can innovate, anywhere. Code is written, collaborated on, debugged, tested, deployed, and run in the cloud. When innovation is untethered from the time and capital constraints of infrastructure, it can truly flourish.”
Sounds good, especially for developers. However, to put it all in perspective, platform-as-a-service is exactly where his company is heading with Force.com. There’s a fine line between prediction and self-promotion, so it’s best to take his views with a few grains of salt. Tim O’Reilly left his own comment on Marc’s post:
“Hmm — if web 1.0 was the web as content, and web 2.0 was the web as platform, how exactly does web as platform count as web 3.0? When people ask me what might qualify for the 3.0 monicker (assuming you want to go there – Web 2.0 was a moment in time, a way of saying “the web ain’t dead” after the dot com bust, not a version number), I say the one thing that might qualify is the rise of cloud applications that are primarily experienced on (and driven by) mobile interfaces.”
Lastly, Jason Calacanis of Mahalo came up with his own take:
“Web 3.0 is defined as the creation of high-quality content and services produced by gifted individuals using Web 2.0 technology as an enabling platform.”
Think Digg or YouTube with censors to snuff out “low-quality” content – a clash of values and aesthetics. It’s not surprising that the founder of a human-powered search engine would look at Web 3.0 as people-driven instead of technology-driven. What we see is often what we want to see. As with Bill Gates, history will either vindicate him, or…
After O’Reilly’s vision of “Web 2.0” spread like wildfire, I suppose it was inevitable that people would begin to speculate about what would come next: Web 3.0, 4.0, 5.0, and so on, ad infinitum (or ad nauseam?). Whether you view it as a purely marketing term or as a useful label for the zeitgeist, it seems these catchphrases are here to stay, for better or for worse. That is, until we can improve on the terminology (and please let it be sooner rather than later). Ideas, anyone?
by Celine Roque
Predicting the future of technology is a hazardous thing. Bill Gates once wrote, “What I’ve said that turned out to be right will be considered obvious, and what was wrong will be humorous.”
No doubt he was thinking of the infamous quote about computer memory attributed to him: “640KB should be enough for everyone.” Although he has since denied ever saying this, I’m afraid he’ll never live it down. Still, there are those who continue to voice their opinions on upcoming trends. Here are some of the things predicted to happen for Web 3.0.
Reed Hastings, founder and CEO of Netflix: “Web 1.0 was dial-up, 50K average bandwidth, Web 2.0 is an average 1 megabit of bandwidth and Web 3.0 will be 10 megabits of bandwidth all the time, which will be the full video Web, and that will feel like Web 3.0.”
Although the increase in bandwidth, along with lower costs, certainly helped spur growth on the Internet, his definition would mean that certain parts of the world are still stuck at Web 1.0, while Japan is already on Web 3.0 or even 4.0 – but isn’t there only one World Wide Web?
Eric Schmidt, a Google stalwart, once said, “If I were to guess what Web 3.0 is, I would tell you that it’s a different way of building applications… My prediction would be that Web 3.0 will ultimately be seen as applications which are pieced together. There are a number of characteristics: the applications are relatively small, the data is in the cloud, the applications can run on any device, PC or mobile phone, the applications are very fast and they’re very customizable. Furthermore, the applications are distributed virally: literally by social networks, by email. You won’t go to the store and purchase them… That’s a very different application model than we’ve ever seen in computing.”
While interesting, this sounds a lot like Tim O’Reilly’s definition of Web 2.0 as the “web as a platform” for applications, so to me, at least, it feels like an evolutionary rather than a revolutionary change.
For his part, Tim Berners-Lee thinks of the Semantic Web as the future – machines will be able to understand data and give meaningful responses. In an article in Scientific American, he said it’s “not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation.”
It’s a promising concept, but it’s also an ambitious project with lots of challenges. There are already several projects geared towards this, so it’s likely to happen. The question is, will it see enough traction in the mainstream for it to be considered a success?
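To make the idea of “well-defined meaning” concrete, here is a minimal sketch using Python’s rdflib library. The vocabulary and facts are invented for illustration – the point is only that knowledge is stored as subject–predicate–object triples a machine can query directly, rather than as prose it has to guess at.

```python
# A toy Semantic Web example: facts as machine-queryable triples.
# The http://example.org/ vocabulary below is made up for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.TimBernersLee, RDF.type, EX.Person))
g.add((EX.TimBernersLee, EX.invented, EX.WorldWideWeb))
g.add((EX.WorldWideWeb, EX.yearProposed, Literal(1989)))

# Because each relationship has explicit meaning, a machine can
# answer "who invented what?" without parsing natural language.
for person, _, invention in g.triples((None, EX.invented, None)):
    print(f"{person} invented {invention}")
```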
by Matthew Hodgson
While organisations continue to struggle with adopting E2.0 through Web 2.0 products, some are looking beyond and asking the question, “what’s next?” If Web 1.0 was about communication, and Web 2.0 about collaboration, what should we be doing now in order to prepare for the future demands of users and the workplace, and to gain the upper hand over competitors?
To prepare for the future we need to understand the evolution of the web, and Gary Hayes suggests that it is moving toward a more immersive environment.
We’re just starting to see that now with Web 2.0, which is pushing the boundaries of information sharing from centralised and controlled by organisations to decentralised, collaborative, and controlled by consumers. In essence, this means:
- Web 1.0 – unidirectional, “push”. E.g. traditional brochureware-style websites
- Web 2.0 – interactive, “push” + “pull”. E.g. social computing websites like MySpace, Wikipedia, and Facebook
- Web 3.0 – immersive. E.g. 3D virtual worlds and ubiquitous computing
- Web 4.0, 5.0 … – the semantic world, with intelligent agents and adaptive information
The real benefits of Web 2.0 and Web 3.0 (as defined by Gary) are just starting to emerge, with online spaces to work and share information, and technology that truly supports information anytime and anyplace. Society is witnessing the emergence of digital natives who are born ‘technology aware’ and expect to be able to use the same technology they take for granted in their social lives in the work environment. And while some may have thought that the notion of a truly semantic web was dead and buried, the problems of database interaction and data interoperability – of providing truly intelligent context to data and information in online environments – have raised the issue again:
How can we prepare and provide for the future of the web?
Annie Rowland-Campbell, a researcher with FujiXerox, suggests that we need to start preparing our data systems for the future by using semantic technologies. That is, separating out our data and metadata, and introducing ontologies to articulate the relationships between data sources, in order to provide true context and meaning to information.
Why separate out these elements? Simply put, traditional point-to-point database design isn’t scalable to the extent we need to provide for the intelligent agents and adaptive information of Web 4.0 and beyond. Even if we’re only dealing with data exchange between 6 stores, for example, providing a true and complete context of information to users means mapping every store to every other store – on the order of n(n−1) = 30 directed points of integration, a number that grows quadratically as stores are added.

If we use semantic technologies and introduce an ontological layer as a shared hub, each store needs only a single mapping into the ontology, suddenly reducing the overall design complexity from 30 connections to 6.
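For the sceptical, the combinatorics are easy to check. Here is a toy calculation (my own sketch, assuming one directed mapping per pair of stores in the point-to-point case, and one mapping per store into a shared ontology in the hub case):

```python
# Toy sketch of the integration-scaling argument.
def point_to_point(n: int) -> int:
    """Directed mappings when every store translates for every other."""
    return n * (n - 1)

def via_ontology(n: int) -> int:
    """Mappings when each store maps once into a shared ontology."""
    return n

for n in (6, 12, 50):
    print(f"{n} stores: {point_to_point(n)} point-to-point vs {via_ontology(n)} via ontology")
# 6 stores: 30 point-to-point vs 6 via ontology
# 12 stores: 132 point-to-point vs 12 via ontology
# 50 stores: 2450 point-to-point vs 50 via ontology
```

The point-to-point figure grows quadratically while the ontology figure grows linearly, which is the whole argument for the extra layer.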
For organisations in Europe, where language is a common barrier to information exchange, this approach is already reaping rewards, and it is where the ISO/IEC 13250:2000 Topic Maps standard was born. The approach turns our original concept of the semantic web – a layer on top of the current web that annotates information in a way that is “understandable” by computers – into something that can actually be fully realised to meet the needs of Web 2.0’s future.
by Jon Husband
Of course it’s silly .. but this post by Hugh Macleod titled “Buckets” got me thinking …
If nature was designed like today’s business and software, water would trickle down the valley in buckets, from bucket to bucket.
We have wireless in coffee shops, Skyping on transatlantic flights, Blackberries, smartphones and laptops wherever we go – why not let (server-based) systems do the delivery of work orders, run the events, do the transactions and capture the data? Why not have the flows defined – loops, warts and all – ready to be refined daily as the organisation learns and grows?
“Anataxonomy” and “Flow” – combine those two principles and use the wonders of technology accordingly.
So what does this mean? Sure, we’re already getting used to the idea of big commercial open-source software companies like Spikesource. But what about non-software? Open-source Exxons? Open-source General Motors?
This is when “Flow” starts getting REALLY important.
Smart knowledgeable people who have studied deeply the issue of why hierarchy seems such a durable concept tell us to get used to it … they say that there are good reasons why hierarchies thrive, even in the face of increasing flows of information and spreading forms of networked semi-transparency.
But hierarchies don’t have to remain static … and this is one of the big deficiencies in current models and in the existing tools of organizational design. Think about it. How often are there reorganizations, changes to departmental structures, downsizings, mergers or acquisitions – and the org chart gets tossed up in the air like a set of pick-up sticks? In the case of larger organizations, the “pick-up sticks” always come down in highly-organized, very neat-looking boxes with straight lines that essentially state: “this is the right design .. this time we’ve got it!”
Until the next change.
Really, organizational structures are basically a rolling flow of change. Why the assumption of stability, of a more-or-less static structure? In my opinion, it’s just that many executive and management types don’t really like the messiness – and the reliance on engagement and willingness rather than control – that accompany conditions of continuous change.
So … what if work meant that at different times and for different projects, you could get *tagged* with different tags for different skills, and *linked* with other relevant or pertinent skill and personality *tags*, and so on? Then, these new-style indicators of capability could be combined with availability and scheduling-optimization software, and you’d have the basic format for a new form of organization chart.
Hierarchies could be developed at a specific time, for as long as necessary, and could involve different people depending upon the situation, the problems, and the desired or hoped-for outcomes. So too for teams and purpose-focused networks of skills, abilities, competencies, willingness and availability.
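To make that less abstract, here is a deliberately minimal sketch (entirely my own illustration – the names, tags and matching rule are invented, not anything existing tools actually prescribe) of what tag-based, on-demand team assembly could look like:

```python
# Toy sketch of tag-based team assembly: people carry skill tags and
# availability, and a temporary structure is assembled per project.
# All names and the matching rule are invented for illustration.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    tags: set[str]
    available: bool = True

def assemble_team(people: list[Person], needed: set[str]) -> list[Person]:
    """Greedily pick available people until every needed tag is covered."""
    team, covered = [], set()
    # Favour people whose tags overlap most with what the project needs.
    for person in sorted(people, key=lambda p: -len(p.tags & needed)):
        if person.available and person.tags & (needed - covered):
            team.append(person)
            covered |= person.tags & needed
        if covered >= needed:
            break
    return team

people = [
    Person("Ana", {"design", "facilitation"}),
    Person("Ben", {"python", "data"}),
    Person("Cho", {"data", "writing"}, available=False),
]
print([p.name for p in assemble_team(people, {"design", "data"})])
# ['Ana', 'Ben'] -- a structure built for this project, then dissolved
```

The chart, in other words, becomes an output of the work rather than an input to it.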
If you stop and think about it for a moment, you can almost *feel* that this would seem more natural and probably be more effective. But we have a large legacy system in place.
Back in the mid-1980s there was a brief eruption of self-managing teams and what were called socio-technical work systems, where some of these types of issues were addressed – except that back then the concepts of *knowledge work*, and mechanisms for manipulating information flows like tags and hyperlinks, were only really fringe ideas.
Not anymore … but the org charts and the performance-management and compensation practices are still (generally) what were used 30, 20 and 10 years ago.
How much longer will yesteryear’s tools continue to suffice?
This is basically the question Gary Hamel addresses in his recent book The Future of Management.
The Web is a near-ideal mechanism in which to culture new strains of social organization. From Craigslist to MySpace to FaceBook to Second Life to eHarmony, from instant messaging to podcasting, blogging, video chat and virtual worlds, the Internet is radically changing the ways in which people find romance, manage friendships, share insights, learn, build communities, and more.
For the moment, though, most of this joyous and frenzied experimentation is taking place outside the plush-carpeted hallways of the corporate old guard.
I find this ironic.
While no company would put up with a 1940s-era phone system, or forgo the efficiency-enhancing benefits of modern IT, that’s exactly what companies are doing when they fail to exploit the Web’s potential to transform the way the work of management is accomplished. Most managers still see the Internet as a productivity tool, or as a way of delivering 24/7 customer service. Some understand its power to upend old business models. But few have faced up to the fact that sooner or later, the Web is going to turn our smokestack management model on its head.