Web 2.0 social issues
Monday, August 13, 2007
I was at a bookstore a couple of days back and saw a little book that caught my attention. I don't remember the exact title, but it was basically about the possible flaws and problems of Web 2.0. While I believe Web 2.0 isn't perfect, something about the book just didn't add up for me. It concentrated on the social rather than the technological aspect of Web 2.0, so that's the angle I'll respond to here: the problems it raises are 'social' or 'ethical' issues, and those are the ones I somewhat disagree with.
For starters, Web 2.0 refers to a change in the internet scene. Previously, it was all about content providers supplying data in one direction; regular people accessing the internet got their data from the 'source'. Web 2.0, however, places the ball in the court of the average internet user. It's a community-based approach, where 'content' can be provided by any user. While Web 2.0 refers specifically to the internet and the web-based communities and hosted services that surround it, there's also a broader perspective from which one can view this strange phenomenon.
In the beginning, end users had to wait for the source to produce something before they could use or consume it. This happened just about everywhere: unless manufacturers produced a brand new product, end users or consumers would never see it. Let's say that as an end user I suddenly have a brilliant idea but, unlike the established manufacturers, don't have the ability to produce it. I could try to sell the idea to a manufacturer, but even then it was still a one-way flow of ideas, with the manufacturer again being the one responsible for producing better ideas and products. With the new community-based approach, the end-user or consumer community can effectively contribute towards an idea or project. This is the core principle that Web 2.0 software and products are built on. Websites like Digg and Wikipedia rely on collective community intelligence instead of just one person or entity.
The book I read basically focussed on one important aspect of Web 2.0 and expounded on why it is potentially problematic. It gives the example of blogging, where every end user is now a journalist, possibly without any qualifications, writing whatever they want about any topic out there. The problem, or so the book claims, is that with so many sub-standard blogs around, one has to crawl through all the garbage to find the one or two rare gems that are actually worth reading. It equates this to giving an infinite number of monkeys an infinite number of typewriters and finally finding a Shakespearean piece somewhere in the output.
While I can see where the author is coming from, there is a very significant difference between Web 2.0 and that analogy. The main complaint seems to be that a very high percentage of those who contribute content on a Web 2.0 platform contribute rubbish. Substandard journalism and unverified ideas plague the sea of information, and without a body to determine the quality or validity of this content, it is treated as inherently garbage. This seems like a valid complaint, until one takes into account that the 'body' which would determine whether the content is up to standard ALSO consists of monkeys. In terms of the monkey analogy, whether a 'product' is of Shakespearean standard would be judged by a monkey, or by an entity made up of monkeys. Furthermore, the only reason Shakespeare's pieces are recognised to be of high quality is that the MAJORITY of consumers deem them to be so.
The fallacy here is to assume that group intelligence, that is, the collective judgement of the community, is lower than that of an exceptional individual or small group of individuals. While this may appear to be true, Web 2.0 communities are, by and large, made up of scientific, inquisitive people who question everything critically and analytically. This very important trait makes the group intelligence much better than what any single individual or smaller group can achieve.
This group intelligence is NOT the same as the mindless 'herd mentality' projected by certain groups. Ignorance is at the heart of that kind of group stupidity; such groups accept everything shared wholesale without critically thinking through each issue involved. When group intelligence is instead coupled with critical thinking, a robust platform for quality content is established.
The book goes on to argue that Wikipedia is essentially a huge fallacy because there is no neutral governing body determining what goes in and what doesn't. The claim is that since any average user can make changes, it is therefore unreliable and 'garbage'. This is not quite true, because the MAJORITY of people listen to facts and reason rather than blindly accepting whatever is shown. Having millions of people patrol it makes it more reliable than any quality-control 'body' could ever be. Sure, you could try to change an entry in Wikipedia out of bias, but who is to say that having a 'governing body' determine content would guarantee that there would be no partiality or fallacies in it? In the end, any unproven 'fact' is viewed sceptically, and any verifiable fact is accepted as mainstream, further increasing the collective intelligence.
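To make that intuition a little more concrete, here is a minimal, hypothetical sketch in Python of why a large crowd of independent, reasonably critical reviewers can out-perform a single gatekeeper. The accuracy figures (90% for a lone expert, 70% for each ordinary reviewer) and the assumption that reviewers judge independently are my own illustrative choices, not anything claimed by the book or measured on Wikipedia; the point is simply the majority-vote effect.

```python
import random

def majority_correct(n_reviewers: int, accuracy: float) -> bool:
    """One trial: n reviewers independently judge a single claim.

    Each reviewer reaches the correct verdict with probability `accuracy`;
    the group's verdict is whatever the strict majority decides.
    """
    correct_votes = sum(random.random() < accuracy for _ in range(n_reviewers))
    return correct_votes > n_reviewers / 2

def error_rate(n_reviewers: int, accuracy: float, trials: int = 100_000) -> float:
    """Fraction of trials in which the group's verdict is wrong."""
    wrong = sum(not majority_correct(n_reviewers, accuracy) for _ in range(trials))
    return wrong / trials

if __name__ == "__main__":
    # Hypothetical numbers, chosen purely for illustration:
    # a single fairly reliable gatekeeper vs. a crowd of merely decent reviewers.
    print("single 90%-accurate expert, error rate:", error_rate(1, 0.90))
    print("101 independent 70%-accurate reviewers:", error_rate(101, 0.70))
```

With these assumed numbers, the lone expert is wrong about 10% of the time, while the majority verdict of 101 mediocre but independent reviewers is almost never wrong. The effect collapses if the reviewers simply copy one another, which is exactly the 'herd mentality' distinction made above.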
Assume, in the worst case, that people collectively believed the 'wrong' thing and evidence to the contrary is then found. The community would collectively 'learn' and adapt, because again, it's not the size of the group but the level of critical thinking that determines the intelligence of a community.
Web 2.0 is an online realisation of democracy, where the majority has the right to determine and share content. The truth is, contrary to what some people would like to believe, low-quality content is largely ignored and forgotten in the huge mass of more interesting, quality content. Sure, some asinine content interests a lot of people and becomes popular, but end users are not so stupid as to believe that every popular story is significant or of great quality. Sure, some important content may slip through the cracks of the democratic system and remain buried, never to be found. However, it is by and large a better system, one that promotes more quality content than a one-directional system does. Plenty of established content providers have produced crap content before, and there's no doubt that plenty of average users have managed to produce quality content.
Undoubtedly, a governing body would carry responsibilities that need to be taken seriously, something Web 2.0 platforms largely do without. This issue, however, can be looked at from a different perspective: the responsibility of accepting or using content is now thrown onto the end user's lap. This gives consumers the prerogative to determine what they want to accept or use, and the right to ignore whatever they deem wrong or fallacious. It also helps that the reputation of any contributor whose content is 'garbage' drops drastically in the eyes of the community itself.
The long and short of it is that the community takes care of itself, with democracy having the final say. You can't really force everyone to believe you're right when the whole world accepts that you're not, and you wouldn't be very wise to try.
So, is there really a big problem with Web 2.0? Perhaps my views are a little too optimistic, or I might be overlooking some other aspect of it. I would be the first to admit that Web 2.0 isn't perfect. However, it seems to be a very solid platform and I'm inclined to believe in it. Like it or not, it's here to stay, at least for a couple of years.
Technorati Tags: Web 2.0, problems, internet, web-based communities
Posted by Gerald at 8/13/2007 04:26:00 PM