Susan Greenfield introduced her main concerns about the web's effect on human beings' adaptable brains and behaviour at the Web at 20 event, asking some of the challenging questions that feature in the developing themes of programme four - is the web changing us?
(You can also read the transcript of the video below.)
So, do you feel that the perfect plasticity of your brain is being moulded into a more infantile state by the constant 'yuck' and 'wow' of the web?
Transcript of Susan Greenfield's speech:
And the question I want to ask is not so much what can we do with the web, but rather what will it do to us.
So what fascinates me very much is if human beings, occupying as we do more ecological niches than any other species on the planet, do so because we adapt so brilliantly to the environment; if that environment is about to change, as I think everyone in this room seems to agree it is, then will the brain change in unprecedented ways too? And I think we need to sit back as a society and consider this because on the one hand like with all technology it can be good, and on the other hand I think it could be very bad if we get it wrong.
We might be entering a world that is more sensory than what we would traditionally call cognitive. By definition you have to have something on the screen. Very few, to the best of my knowledge, very few experiences of the screen are just reading words off a screen. So what do sensory images and sounds do that books do not do, and vice versa?
Of course I don't want to give a value judgement. You have to ask, is it more important and interesting to have a here-and-now experience, to have a process, to have the thrill of solving an abstract problem, versus the rather lacklustre, sensory-poor notion of turning pages, for example, in a rather clunky way, but having something that actually changes the way you will see the world in a much more perhaps deep or extensive way. I'm not saying that that won't happen if you're on the web, say, but I'm just thinking about young brains exposed to different experiences. And it could be that the multimedia, the sounds and the colours and experience, is so great that that becomes the premium - 'yuck!' and 'wow!' - 'yuck!' and 'wow!' And the short attention span, the thrill of pressing a button and seeing something back in your face, is very different from a long attention span, as you plod through following the author, holding their hand in a sort of slavish way.
One of the most important issues I think, as well as the good thing about IQ going up, is the issue of risk. Obama said that the current financial crisis is attributable in part to greed and recklessness. Now greed and recklessness occur as part of something called a frontal syndrome, when the frontal part of the brain is less active in various conditions.
Could it be - and also this frontal part of the brain only comes on stream in late teenage years - could it be, given the brain is so obliging in the way it adapts, that if you're putting it in a situation where you are living for the moment in a rather infant-like way with lots of sensory experiences, that that could be being changed? And I think that's one of the things that would be very interesting to look at.
My final issue is identity, and it does stun me, Twitter for example, where the banality of some of the things that people feel they need to transmit to other human beings. Now what does this say about how you see yourself? Does this say anything about how secure you feel about yourself? Is it not marginally reminiscent of a small child saying "Look at me, look at me mummy! Now I've put my sock on. Now I've got my other sock on," you know? And I'm just being neutral here, I'm just asking questions, right... What does this say about you as a person?
Comment number 1.
At 11th Sep 2009, SheffTim wrote:The quote above doesn't do justice to Greenfield's views. Some of the results here are more substantial:
Complain about this comment (Comment number 1)
Comment number 2.
At 11th Sep 2009, The Phazer wrote:Please tell me you'll be getting in a proper neuroscientist to go to town on Susan Greenfield's complete ignorance of the scientific method. For instance, her quote "Of course I don't want to give a value judgement" is simply a lie. She has repeatedly made value judgements in public, and is highly biased - her claims about being neutral above are laughable.
Heck, you don't even need a specialist - even the likes of Ben Goldacre would absolutely destroy Susan Greenfield in a debate.
Phazer
Complain about this comment (Comment number 2)
Comment number 3.
At 11th Sep 2009, Mo McRoberts wrote:You may want to have a word with Ben Goldacre about these claims:
Complain about this comment (Comment number 3)
Comment number 4.
At 11th Sep 2009, SheffTim wrote:To approach the topic of whether and how the digital revolution could be affecting our society from another angle - a recent headline from China.
China Bans Online Gangster Games
On the grounds that "These games encourage people to deceive, loot and kill, and glorify gangsters' lives. It has a bad influence on youngsters,"
Could immersion in an amoral or immoral virtual world make impressionable minds more likely to take on those characteristics of the characters they play?
Complain about this comment (Comment number 4)
Comment number 5.
At 11th Sep 2009, Dave Cheadle wrote:It frightens me that this person is the director of the Royal Institution. Science is the observation of phenomena and the testing of explanatory theories for those phenomena through controlled experimentation. I respectfully submit that calling Twitter "banal" on the basis of a made-up anecdote about socks, and thereby chewing the same cow's cud as scaremongering stories from the popular press, hardly qualifies as noteworthy.
Complain about this comment (Comment number 5)
Comment number 6.
At 11th Sep 2009, cyberissues wrote:Hopefully I will be able to do a full critique soon, but in the meantime, this article is interesting:
Complain about this comment (Comment number 6)
Comment number 7.
At 11th Sep 2009, Justin Pickard wrote:I can only agree with @paulcarps, @nevali and @The Phazer.
Sure, there's probably something of value here - I know my attention span and ability to focus has probably been adversely affected by overexposure to the net. But that doesn't alter the fact that Greenfield's rhetoric is simplistic and hysterical, at best. I mean, in one of her recent columns for Wired UK, , ferchrissake.
Her comments on Twitter are totally missing the point - it may be banal, but it's not attention-seeking. It might be hard for a neuroscientist to see, but we're looking at patterns of phatic communication.
If you want to address these kind of issues in the documentary, you're better off with people like and .
Complain about this comment (Comment number 7)
Comment number 8.
At 11th Sep 2009, TaiwanChallenges wrote:I think this speech ignores the fact that we're hard-wired for yuck and wow. We evolved in a world where the immediate was pretty much all that mattered. It's only in the last few thousand years that humans have had to think in more organised ways, and we haven't evolved at all since a long time before that. So, in a sense, the instant gratification we seek online is a kind of return to our natural state.
We might argue that these values/responses are not adequate for the complex world we live in today, but that doesn't mean that the web is 'changing us' - it means that we're behaving more naturally than our cultures train us to behave in order to survive.
Here's a silly theory of my own, which someone else may have come up with previously but I've never heard: If you read extracts from old texts they don't "make sense" in the way that modern writing does. Anything predating the invention of the rational scientific method (the Renaissance?) appears to be heavily right-brain influenced - much like the stuff my Asian students produce today. If so, then the modern left-brain, analytical approach to language and thinking which dominates in the west is an anomaly. Most people that have ever lived, and most that are alive today, are uncomfortable with the kind of thinking that Baroness Greenfield seems worried about us losing. Lots of holes in this theory, I'm sure, and I'd love to hear where I'm wrong or right from better-informed people.
Also, "I'm just being neutral" - not! I agree that this doesn't read like an informed or balanced opinion. If you believe that we adapt brilliantly to changing environments, and the environment is changing, why worry that adapting is bad? She's not making sense. I'll never understand why we let politicians get involved with policy-making.
Complain about this comment (Comment number 8)
Comment number 9.
At 11th Sep 2009, A_PERSON_NOT_A_BOT wrote:Ai-yo..............
"........as you plod through following the author, holding their hand in a sort of slavish way." --- Homer, Shakespeare, Chekhov, Voltaire, Kong Ming, Ved Vyasa, Luo Guanzhong et al are LOL'ing in their after-lives, thinking of us "holding their hands in a slavish way". The last time I checked none of those authors was alive for us to sing Beatles' 'I wanna hold your hand!" at them, :*).
Professor Greenfield should also note that the sale of Kindle books is INCREASING HEALTHILY amongst young readers. They're simply swapping the paper format for the digital one --- or even buying both. If she were a technologist herself rather than a theoretician, she'd be aware that there is amazing work being done at Carnegie Mellon University where the next generation of kids are getting tools to enable them to write and publish digital books with embedded streaming videos and every other InDesign publishing tool that an editor at a Condé Nast publication has access to.
"Now greed are recklessness occur as part of something called a frontal syndrome, when the frontal part of the brain is less active in various conditions." --- Hmmn, it's not about the QUANTITY (more or less) of an activity which determines human risk taking. It's the QUALITY of contextualization and ability to calculate consequence costs.
Wrt Twitter and its banality --- Professor Greenfield might like to comment on the @welovethenhs group that was created to counter US Republicans bad-mouthing the NHS, the breaking of the 'Miracle on the Hudson' story when Chesley Sullenberger safely landed a stricken plane on the river, the coverage of the Iranian elections, and more.
I don't Twitter, but I do understand its technical relevance, applicability and potential. Imagine if the people in the Asian tsunami of 2004 had been able to receive warning messages via an IM technology like Twitter (to their PCs, mobiles or other devices --- even gaming consoles). The ones who got those IMs would have been able to inform those who weren't online/on-mobile. More people would have left the area or been evacuated. More lives could have been saved.
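To make that fan-out idea concrete, here is a toy Python sketch; every name in it is hypothetical, and a real system would sit on carrier SMS gateways and IM server APIs rather than a print stub:

    # Toy illustration only: fan one alert out to subscribers across channels.
    # Subscriber, send and broadcast are hypothetical names, not a real API.
    from dataclasses import dataclass

    @dataclass
    class Subscriber:
        name: str
        channel: str   # e.g. "im" or "sms"
        address: str

    def send(channel: str, address: str, message: str) -> None:
        # Stand-in for a real gateway call (SMS aggregator, XMPP server, etc.).
        print(f"[{channel}] to {address}: {message}")

    def broadcast(alert: str, subscribers: list) -> None:
        # One inbound warning becomes one outbound message per subscriber.
        for s in subscribers:
            send(s.channel, s.address, alert)

    broadcast("TSUNAMI WARNING: move inland immediately",
              [Subscriber("Ana", "im", "ana@example.org"),
               Subscriber("Raj", "sms", "+00-0000-0000")])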
Also, a lot of the twitter streams are links which lead to serious content (if that's what people are interested in). 'Nature' and 'New Scientist' material is linked there as are scientific papers on technology, recipes, how-to guides, etc.
I understand Twitter because during Web 1.0, I created a corporate information and collaboration hub which had a P2P IM facility. Yes, some of the streams were comments like "Gotta go grab a coffee." Nonetheless, the fact that we could slingshot critical information from HK to NY to Paris to Capetown to Shanghai etc meant that people working on the same project felt socialized, included and on the same page.
It would be good for Professor Greenfield to go into Google or Microsoft or another major techco and experience for a period of time how the Web REALLY affects us, rather than extrapolating from "fuzzy frameworks".
Also, the patronization is seriously objectionable. How is teenagers' interest in what's hot --- "yuck" and "wow" --- NOW different from what happened during the Swinging Sixties and the "yuck" rejection of prudishness in favor of the "wow" of Elvis? And yet those kids from then grew up perfectly reasonably to become the very people who are responsible for our scientific institutions, our media, our governments and other social institutions now. Presumably the people postulating and worrying needlessly about teens, their minds and their behaviors were teenagers themselves once and have attention spans sufficiently robust and long to remember that?
:*).
On my part, I briefly caught the conversation of 2 teenagers on a bus earlier. They're way more sussed about what's "yuck" and "wow" and worthy of their attention spans than some adults give them credit for! Yeah and they weren't busy texting either. They were talking about starting college and how the girl at the back is the cousin of someone in the year below.
Sometimes, I wonder if the theoretical adults actually go out and about and ENGAGE with kids themselves. Or if they themselves are as guilty of being as disintermediated and as disinterested as the very technology that they postulate is potentially problematic.
Complain about this comment (Comment number 9)
Comment number 10.
At 11th Sep 2009, A_PERSON_NOT_A_BOT wrote:Oops, missing words: Also, the patronization OF KIDS is seriously objectionable.
Complain about this comment (Comment number 10)
Comment number 11.
At 11th Sep 2009, SheffTim wrote:Nicholas Carr is on Molly's list of 'People who say interesting things' (previous post); to broaden the discussion out from Susan Greenfield's views it's worth introducing Carr to those that haven't come across him.
Is Google Making Us Stupid? by Nicholas Carr
"Over the past few years I鈥檝e had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn鈥檛 going - so far as I can tell - but it鈥檚 changing. I鈥檓 not thinking the way I used to think. I can feel it most strongly when I鈥檓 reading. Immersing myself in a book or a lengthy article used to be easy..."
Discussion on the above (It was widely syndicated and commented upon):
Jamais Cascio wrote a reply:
'Is Google actually making us smarter?'
Stan Schroeder wrote 'Why Google Is Making Us Dumber'
'Google is so good at retrieving information, that we don't bother to remember anything anymore.'
Emily Yoffe wrote:
'How the brain hard-wires us to love Google, Twitter, and texting. And why that's dangerous.'
Mindhacks replied to Yoffe's arguments about email, tweets etc stimulating dopamine 'rewards' that lead to addictive feelings
(DRev: Might be worth adding the Mindhack folk to your interview list:)
This also is of interest:
How The Internet Affects Your Brain
'The energy needed to obtain information when required has diminished to almost nothing, so why bother learning stuff?'
For older people learning to use the Web:
"emerging computerized technologies may have physiological effects and potential benefits for middle-aged and older adults,"
And Twitter: I'm sure it could be useful in a widespread crisis, but otherwise:
"A short-term study of Twitter has found that 40% of the messages sent via it are "pointless babble.""
Complain about this comment (Comment number 11)
Comment number 12.
At 11th Sep 2009, leoncych wrote:Wow - an excellent exercise in PR over scientific method or argument.
I do wonder if BSG actually knows any teenagers - I think many of us seeing that snippet who have teenage children might be a little bemused. My son is a straight A* student and he also reads two or three books a week. He also plays a lot of Wow and Yuck computer games.
The comment about Twitter is seriously disingenuous - you cannot set up an anecdotal statement like that and then withdraw into the poverty of 'it's merely a reflection'. We are in serious Daily Mail territory here...
This speech appears to owe more to sophistry than actual fact.
Worldwide I know over a thousand educators who are using Twitter in a highly defined, productive way. For the most part young people don't use it so I doubt we need have concerns there.
I think it would be interesting to have a debate between BSG and danah boyd or Henry Jenkins - now THAT I would love to see...
On issues of identity I would draw people to the ongoing research at Reading University's 'This is Me' project:
Complain about this comment (Comment number 12)
Comment number 13.
At 11th Sep 2009, leoncych wrote:You may also want to explore the 'V' generation as outlined in this blog post about teachers' and students' use of Virtual Worlds.
Particularly the video interview I did with Vicki A Davis - here is what teenagers are doing productively with that platform as opposed to theories about it.
I think the teenagers I interviewed there seem to be evolving perfectly normally from what I could see - their engagement with education and other cultures was inspiring :
Complain about this comment (Comment number 13)
Comment number 14.
At 13th Sep 2009, A_PERSON_NOT_A_BOT wrote:Here's an observation, a paradox and a concern wrt Twitter.
Those of us who actually work (or have worked) in technology share a common aim: make that technology as widely available and easy to use as possible. The impetus for that is a varying complex of economic gain (e.g., Google Search), altruism (e.g., OLPC) and/or democratization (e.g., Linux and any other Open Source framework or CC content). Twitter is gaining traction because its barriers to entry are so low:
* sign-up is easy to do
* there's only a requirement to write up to 140 characters
* status update is disseminated rapidly, near real-time
Now, here's the thing. It's got a 140-character limit and no WYSIWYG editor which would allow us to readily insert functions, derivatives, chemical symbols, superscripts to indicate exponents etc. so.................CAN WE SERIOUSLY EXPECT USERS TO BE POSTULATING FERMAT'S LAST THEOREM OR SOLVING THE RIEMANN HYPOTHESIS or anything of serious concentration and distillation THERE?!
Answer: er.....NO.
That's not the purpose or orientation of Twitter in the first place. The 140-character limit affects (or edits) what people can and should post: links to long articles and brief updates of the type that Paris Hilton posts.
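As a toy illustration of how hard that ceiling is - my own sketch in Python, not Twitter's actual API - the constraint is a one-liner:

    # Minimal sketch of the 140-character ceiling; not Twitter's real API.
    TWEET_LIMIT = 140

    def fits(update: str) -> bool:
        return len(update) <= TWEET_LIMIT

    def clip(update: str) -> str:
        # Truncate over-long updates to exactly the limit, ellipsis included.
        return update if fits(update) else update[:TWEET_LIMIT - 3] + "..."

    print(fits("E=mc2"))                # True: a short equation squeezes in
    print(fits("Putting my socks on"))  # True: everyday updates fit easily
    print(len(clip("x" * 200)))         # 140: anything longer gets clipped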
One of the few equations which would fit into 140 characters would be E=mc^2. The fact is that for the "clever stuff" there's other technology available like: Deep Blue, the upcoming Google Squared, Wolfram's Mathematica or MOE for the Life Sciences.
Twitter is for the regular people's CHAT stuff. Instant messaging is also known as a "chat channel" for a good reason. It's about chat --- not conversation or oral discourse of the profundity found in Plato's Republic, the complexity in the Senate wrt the US healthcare reforms or whatever elucidations occur in Professor Greenfield's Royal Institution.
As an associated point which highlights the paradox of this situation: according to the Pear Analytics study quoted in that link (about 40% of content on Twitter being "pointless babble", or as Professor Greenfield calls it "banality" --- "I'm eating a sandwich" / "I'm putting on my socks"), it's interesting that the parameters for what constitutes smart, of value and of interest are set and governed by a particular academic elite rather than, say, a majority. It would be interesting to conduct an online survey and ask the millions of Twitter users themselves whether they regard their updates as banal and what value they would assign to those updates.
After all, aren't we supposed to be striving towards a Web which is FREE + OPEN + ACCESSIBLE TO ALL (and not only those "like us")?
The concern about the over-emphasis on the supposed evils of Twitter and our attention spans is to do with policy. People like Professor Greenfield, with theories like hers, have some influence over government policy on digital matters because of the brain and education connection. Interestingly, President Obama's Science and Technology Advisory Council comprises these people:
*
We'll all note that Eric Schmidt and Craig Mundie (CEO of Google and Chief Strategist of Microsoft, respectively) are there in conjunction with experts on renewable energy, human genomes, superstrings and nanotechnology. We'll also note that the US environment for technological innovation on the Web exceeds the UK's --- both in terms of actual development work and financing.
Even the American approach to social commentary on tech developments is oriented differently. They don't approach it from, "This is what's wrong with it. It's banal. It's making our brains XYZ." They approach it from, "These are the fascinating, innovative aspects. Here's what's missing. Here's how it can be better."
Compare and contrast the Greenfield and Boyd methodologies. Factor in also the generational difference element.
No technologist I know imagines that Twitter is the be-all-and-end-all of technological civilization. It is merely a tool in an ENTIRE EMERGENT ECOSYSTEM OF E-INTELLIGENCE ENABLERS. Yes, our brains will be wired differently, but it's unlikely to be anything as dramatic as us all losing our attention spans. More likely that we'll be able to filter, cross-connect and discern value from content sources in a more contextualized and democratic system.
It will be like that scene from 'Minority Report' --- albeit it will be the many and not just Tom Cruise's character. We'll be able to call up immersive holographic files from "The Cloud", apply motion-sensor technology to flip through records, simultaneously display 3D rotations of connected files, and contextually filter and pinpoint with ATTENTION and focus precisely what we need.
If anyone thinks, "Oh, purlease, that's just the movies!" please take a look at what Microsoft has planned for 2019:
*
Also, become more informed about the Semantic Web, immersive collaborative environments (look for US sources; there are no UK ones), haptics, 3D technologies like Google SketchUp and Viewer, and more.
Once this 360 is gained, applying 20/20 vision will make it crystal clear that Twitter and its siblings in socnets are but a contributing node --- not the entire cortex --- in the wiring of the Web and the potential wiring of our brains.
Complain about this comment (Comment number 14)
Comment number 15.
At 13th Sep 2009, A_PERSON_NOT_A_BOT wrote:Just to be cheeky, here's my 140 of banality:
GOING FOR A SWIM ON A SUN! MAYBE WILL FLOAT + FINK.
Complain about this comment (Comment number 15)
Comment number 16.
At 13th Sep 2009, A_PERSON_NOT_A_BOT wrote:Yes, I can spell think, :*). Fink's the junction of these crossroads:
--- colloquial teen slang for "think".
--- the Fink Team wants to bring the full world of Unix Open Source software to Darwin and Mac OS X.
--- 'Barton Fink' by the Coen brothers about an intellectual NY playwright with an idea about the "Theater of the Common Man".
--- Stanley Fink (a godfather of hedge funds) and Larry Fink (CEO of BlackRock, which manages in excess of US$2.0 trillion), financiers who took risks on behalf of their client companies and contributed to increases in value for those companies.
À bientôt, hasta luego, 再见, ciao as they say!
Complain about this comment (Comment number 16)
Comment number 17.
At 13th Sep 2009, Dan Biddle wrote:Excellent comments, thoughts, links and ideas as ever. I'm entirely grateful for being introduced to (among the many items I've hitherto not considered) the idea of 'phatic' communication. I'd never heard that term before. Likewise the notion of a 'Teacherpreneur' from .
I've been away this week (writing this on the train home), and seem to have missed one of the best weeks on the blog in the process. That'll teach me. ;)
Regarding the mention of Nicholas Carr, @SheffTim will be pleased to hear we should be uploading a blog by Nick this coming week, so watch this space. Meanwhile, those links to other articles around this debate are much appreciated.
Until a new week begins... Many thanks.
Complain about this comment (Comment number 17)
Comment number 18.
At 13th Sep 2009, Dan Biddle wrote:Ok. I can't stay away.
@Leoncych - this link doesn't seem to work; could you please check and re-post?
Baroness Greenfield might wonder at . "Look, mummy, I've dug the potato patch; now I've checked out the chickens..."
40% of Twitter communication is banal 'pointless babble'? Isn't that true of most communication? I'm sure even the greatest seers, orators and writers, like Elvis, have to put their trousers on one leg at a time, and, like the President of the United States, 'sometimes must have to stand naked'.
Complain about this comment (Comment number 18)
Comment number 19.
At 13th Sep 2009, SheffTim wrote:I stir the pot occasionally just for the sake of it; I think it's too easy to be swept away by the 'Wow, isn't the Web amazing' factor - it diminishes critical thinking. Can anything be that perfect? Is there no downside at all?
I take the point about 'phatic' communication (small talk) being necessary to maintain human relationships (FB statuses and comment threads serve the same purpose, but with a smaller, more targeted group of people); if I have a concern it would be about the best use of productive time and energy.
Information overload isn't a topic that's been raised much; but it links with previous DRev threads about how people can decide what information can be trusted, how it can best be filtered to find the most relevant and so on. (As Person Not A Bot put it: "More likely that we'll be able to filter, cross-connect and discern value from content sources in a more contextualized and democratic system.")
I suspect there will be (or already is) a well-educated, technologically literate elite that will be able to do a fairly good job of it, but the rest (the majority) will struggle with the task. (And the gap between the 1st and 3rd worlds grows ever wider.)
The links leoncych posted were most interesting as to where next-gen technologies will lead us, both in education and leisure. Particularly impressive was the Project Natal Milo demonstration of interactivity with a virtual character. (I've put a YouTube link to the same video below.)
It does raise the thought of whether we'll end up with virtual 'best friends' - always there for us, never critical, etc. There's scope there for some good sci-fi stories; e.g. if a child doesn't get on with their parents, could they find better virtual ones? (The same could apply to adult relationships.)
The Learn for Life blog is good; I hadn't come across it before. It's bookmarked.
What will be interesting is whether virtual worlds will be able to come off the computer screens via 3D holography. One demonstration of 3D holography that impressed me was used effectively at a fashion show. Movie CGI can do this of course, but being able to project it convincingly into a real-life auditorium is impressive. Who knows where this might lead?
That and 'Augmented Reality' holographic toys. It will be interesting to see whether they maintain children's interest for as long as a real, physical toy does.
Once you get over the 'How do they do that?' fascination there doesn't seem to be that much you can do with them. Again though, it's early days.
Complain about this comment (Comment number 19)
Comment number 20.
At 13th Sep 2009, A_PERSON_NOT_A_BOT wrote:@SHEFFTIM --- "I suspect there will be (or already is) a well-educated, technologically literate elite that will be able to do a fairly good job of it, but the rest (the majority) will struggle with the task. (And the gap between the 1st and 3rd worlds grows ever wider.)"
Yes and this is why I'm developing my democratizing 360-2020 perception and values tool: AS A MEANS TO ADD SMARTER CONTEXTUALIZATION FILTERS ONLINE FOR......EVERYONE (not just the tech literate or the 1st world).
There is some buzz that the data structuring afforded by the Semantic Web stack, ontologies and protocols will provide this contextualization. From the SemWeb applications and services I've tested to date, they won't. At best, online objects will be better classified so that Paris is semantically tagged and differentiated with associated relationships in RDF either as:
* Paris, location, capital city of France, latitude-longitude marker
* Paris, location, city in Texas, US, latitude-longitude marker
* Paris, person (historical), Homer's Iliad, Prince of Troy
* Paris, person (live), Hilton, celebrity, Hilton Hotels heiress
* Paris etc.
Whilst this is undoubtedly a gigantic and ambitious leap in the right direction, spearheaded by Tim Berners-Lee and the W3C, there are still contextualization filters that can be created to really extract what we MEAN, over and above the current definitions of what the "semantic" in the Semantic Web should enable us to do in terms of discerning between and relatively connecting objects on some sort of Social Graph.
Setting aside the problems of scalability in RDF and server loads, there are still tools to be imagined and built.
What does Paris really mean, and what are the individual frames of reference we each have for it? Sure, it could be as defined by the Semantic Web stack, but there are also classifications which are not defined by the ontology tools (a rough sketch in code follows this list):
* romantic
* expensive
* on trend (as per Paris Hilton)
* intellectual
* Rive Gauche chic
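Here, then, is the rough sketch promised above: the "official" disambiguation expressed as RDF triples via Python's rdflib, plus the subjective connotation layer that the ontology tools leave undefined. This is an illustrative toy only - the ex: namespace and every property name are invented, where a real deployment would reuse vocabularies such as FOAF or GeoNames:

    # Sketch: 'Paris' disambiguation as RDF triples, plus a subjective
    # 'connotation' layer. The ex: namespace and all property names are
    # invented for illustration; they come from no real ontology.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/ns#")
    g = Graph()
    g.bind("ex", EX)

    g.add((EX.Paris_France, RDF.type, EX.City))
    g.add((EX.Paris_France, EX.country, Literal("France")))
    g.add((EX.Paris_France, EX.latLong, Literal("48.85,2.35")))

    g.add((EX.Paris_Texas, RDF.type, EX.City))
    g.add((EX.Paris_Texas, EX.country, Literal("United States")))

    g.add((EX.Paris_of_Troy, RDF.type, EX.HistoricalPerson))
    g.add((EX.Paris_of_Troy, EX.source, Literal("Homer's Iliad")))

    g.add((EX.Paris_Hilton, RDF.type, EX.Person))
    g.add((EX.Paris_Hilton, EX.occupation, Literal("celebrity")))

    # The 'missing' contextual layer: frames of reference, not facts.
    for tag in ("romantic", "expensive", "intellectual", "Rive Gauche chic"):
        g.add((EX.Paris_France, EX.connotation, Literal(tag)))

    print(g.serialize(format="turtle"))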
Anyway, as I wrote before, this year I finally had my "Eureka!" moment. When I'm not here, sanity-checking the likes of Professor Greenfield's hypothesis, adding a female perspective amongst you all and making requests to the BBC DigRev team, that's what occupies my neurals and time.
3D HOLOGRAPHY + AUGMENTED REALITY
---------------------------------
The place to look for what the future holds is in medical developments and surgical training using AR:
*
See also Holographic Google Earth:
VIRTUAL PETS + ROBOTICS
---------------------
We've already seen examples of the virtual pet in the Tamagotchis.
Here's an interesting passage from a 2000 Brand Strategy report by Anna Morton which puts the whole "Are digital toys and stimuli making our kids less or more intelligent?" question into perspective:
The global market for toys and games grew by almost 25% between 1994 and 1998 in spite of recent recession in some key markets. Leading the growth were 'must have' toys such as Teletubbies and Furbies from Hasbro and the Sony PlayStation. Intelligent toys, interactive toys incorporating a microchip such as Bandai's Tamagotchi 'virtual pets' have made a considerable impact on the market, often at the expense of the activity/construction sectors.
This contrasts with an article written in 1997 entitled "Is this toy a threat to our mental health?":
*
This is why it's interesting that 12 years on Twitter seems to have replaced the Tamagotchi as the pet topic of neuroscientists concerned for the mental well-being of kids.
Unfortunately, this BBC DigRev isn't about robotics and AI too or I'd link to Asimo and the robots being planned as healthcare assistants in Japan.
Complain about this comment (Comment number 20)
Comment number 21.
At 14th Sep 2009, A_PERSON_NOT_A_BOT wrote:Returning to Professor Greenfield's video and position........I have two suggestions for the BBC DigRev team:
(1.) Please can we have a 'Question Time' style panel and debate on this issue with some of the experts proposed by commentators here and an audience comprising a representative proportion of teenagers?
(2.) Please can we have a segment of some teenagers doing a show+tell about which technologies they DO actually like and engage with, as distinct from adult postulations of what they're into?
Thanks.
So........some swim on a Sun, some finking and some food later, here's some "Bigger Picture" perspective along with some specifics. This is my non-value judgment:
(1.) Unfortunately, Professor Greenfield isn't as "in the know" about teenagers and Twitter as she could be. This is because her sources of analysis may not include business, techblog or teen perspectives:
* According to comScore (June 2009), 11.3% of visitors to Twitter.com in the U.S. are ages 12-17. Internationally, in May 2009, only 4.4% of visitors were younger than 18.
*[Unsuitable/Broken URL removed by Moderator]
* According to Nielsen in July 2009: "Perhaps even more impressively, this (Twitter's) growth has come despite a lack of widespread adoption by children, teens, and young adults."
*
So we don't need to fret needlessly about how Twitter is disrupting the brains of teens or arresting their attention spans. Twitter is not one of their online tools of choice.
(2.) Clearly, we need some Tweetable equations for Generation EI (EI=electronic intelligence, pronounced "eye"). Here's an example.
∆TI = ∆AI / ∆I_A

where ∆TI is the increase in teenage intelligence,
∆AI is the increase in INFORMED ADULT INSIGHTS, and
∆I_A is the decrease in inappropriate adult interference.
By all means the scientists should examine the Web's potential dangers as well as its advantages. They should apply Descartes' reductionism, play Devil's Advocate with subjectivity, put it through the prisms of Ptolemy, mix it up with a healthy viral of Darwin and Mendeleev and seek to synergize it all in some sort of Hawking's Grand Unifying equation.......
Just PLEASE do it in an intelligent, inclusive, informed, impartial and imaginative way. The reputation of the Royal Institution and the associated policies on education, scientific investment and economic opportunity with technology depend on this.
Now, one of my American friends --- of Professor Greenfield's generation --- noted that for some people Twitter is the equivalent of online graffiti. Fair enough, Banksy is not to everyone's tastes compared with the Old Masters like Titian, the Impressionists like Monet, the Surrealists like Dali, the Cubists like Mondrian or the Modernists like Picasso. Nonetheless, "online graffiti" as a media form IS accessible and it does increase democratic choice.
Tying it in with anthropology, what's interesting is that our ancestors scribbled / etched / graffitied on walls to leave us some mementos as well as simply decorate their cave dwellings and mark out their territory in ownership rights. For us those are reminders of them, their societies and their lives. This happened in caves, in temples and in other structures throughout history and across different lands.
The "Wall" is now replicated on Facebook and, frankly, any site with a comments panel we can type into. Okay, so how VALUABLE is that Neanderthal or early Homo Erectus graffiti / shorthand / scribbling to OUR present-day understanding of human intelligence, societal development and civilization? Likewise, will some of the streams on Twitter be rediscovered by our progeny of C30th and help them to gain more insights on where human intelligence, societal development and civilization was in the C21st?
Probably.
Oh, and let's wonder for a moment whether there might not have been people in the cave taking offense to the etchings and trying to stop them from being expressed. Imagine what would have happened if they'd succeeded. We would know LESS not more about human intelligence, etc.
As for Professor Greenfield's comment: "... Is it not marginally reminiscent of a small child saying "Look at me, look at me mummy! Now I've put my sock on. Now I've got my other sock on." I hope when I become a parent I don't patronize my child by jumping to some nonsensical conclusion that they must be needy, vain or egotistical when all they're doing is providing me with practical information about their state of sock-on or otherwise.
Anyway, if someone were ever to tweet "Putting my socks on" at me I certainly would try not to apply value judgments about their personality or their sense of self / insecurity or otherwise. My associations would be either:
* Maybe they're cold. Putting on their socks will warm them up.
* Perhaps they're about to put on some shoes and head out, so they may be going offline for a while.
* I wonder if those socks are the ones she knitted last week / month / year.
See? Don't jump to judgments about the twitterer's ego. It's better to seek out the context of what they're communicating.
In closing, I just want to say that during the Age of Enlightenment in the C18th the UK's scientific brilliance shone (Ada Byron, Charles Babbage, Charles Darwin, Edmond Halley, James Clerk Maxwell and Sir Isaac Newton). Today we are on the verge of a new Enlightenment, in terms of not only being able to apply reasoning to scientific methodology but also harnessing technology to cross-pollinate sources from different disciplines in the quest for better solutions.
Hopefully, the UK's brightest scientists will shine the lights onto the right spots and smart insights.
Complain about this comment (Comment number 21)
Comment number 22.
At 14th Sep 2009, Leon Cych wrote:The trouble with Augmented Reality is that you end up with things like Top Trumps AR cards - great for kids but educationally rather facile, with limited appeal - almost a Wow and Yuck factor, if you like.
The trouble with Milo, also, is that the film is carefully set up and scripted - I doubt he would pass a Turing test if anyone deviated from the script.
The problem with Edutainment is that it doesn't have enough deep learning. If you want to see deep learning with a lot of research around it, then go to the Consolarium games unit blog in Scotland, where they have evolved a much better way of using commercial products in education.
The key being binding it in closely to the communities involved - look up their use of Mario Kart and Guitar Hero (yes seriously!) on there, to see how it should be done - more WOW than Yuck I think with academically researched proven outcomes... :
The work of Derek Robertson and Ollie Bray there in particular:
The key is, of course, binding communities into these technologies - making them authentic and engaging AND effective in terms of teaching and learning.
are doing some interesting work on AR on phones as well - and guess what - it's all based around community...
Of course all these people are on Twitter and are part of the growing TeachMeet movement which uses Twitter and many other Web 2.0 tools to spread the word around the teaching community.
I am filming TeachMeets for FutureLab and their map of innovation project would also be worth looking at.
Here is the TeachMeet Talks channel - I will have over 50 films on there by year end:
A lot of these teachers are bound in by the use of Twitter (usually connected via an ad hoc network of smart phones); they are a very organised bunch of people using Twitter to maximum effect.
These are well thought out pedagogies involving, yes, Twitter in some cases - they are still evolving a whole new way of working in this area -
and here in Science:
see - people in education have been doing this for some time...
So when BSG talks of Twitter and socks I just think of this inspiring network and wonder how ignorant someone can be of the very, very sophisticated use of Twitter...and this is just one area - I could cite endless other examples in the business, health, legal and other professions...
The interesting thing about Twitter is that it is a global community, and different groups are beginning to hook up across silos and disciplines, forming ad hoc personal learning networks. It's like a coral reef: it will eventually join up, and step change will take place.
Baroness Greenfield needs to look a little closer I would suggest...
Complain about this comment (Comment number 22)
Comment number 23.
At 14th Sep 2009, A_PERSON_NOT_A_BOT wrote:As I noted before:
Also, become more informed about the Semantic Web, immersive collaborative environments (look for US sources; there are no UK ones), haptics, 3D technologies like Google SketchUp and Viewer, and more.
Once this 360 is gained, applying 20/20 vision will make it crystal clear that Twitter and its siblings in socnets are but a contributing node --- not the entire cortex --- in the wiring of the Web and the potential wiring of our brains.
Here is a SQUEAK/TWEAK environment partly architected by Alan Kay. Yes, THAT Alan Kay --- Apple genius, Turing Award winner and one of my personal heroes:
*
When I have time, I dial-in to the development call of the guys building the environment. The project leader knows me because I managed to beat their chess algorithm in just over 50 moves; the first time anyone did that. So he asked me to become involved.
There is an IM facility, developed on Jabber, which is similar to Twitter. That's not what's important, though. What's important is that it's an OS, a browser, a UI, a code editor, a site, an IM and more all-in-one and it's going to facilitate the education and creativity of kids.
Here are some helpful videos:
*
*
*
*
The reason it's important to gain CONTEXT, PERSPECTIVE and PERSPICACITY on any technology is precisely because if you have an insufficient grasp of it, you end up putting inappropriate inputs into the policy frameworks and end up with..........increased likelihood of that policy failing.
INFORMED INPUTS => SMART PROCESSES => HOLISTIC OUTCOME
The UK used to lead the way scientifically. In recent times, the number of science and ICT students has dropped:
*
*
*
*
*
This in turn affects the UK's position in global ICT and business league tables.
So whilst Professor Greenfield is linking Twitter with socks and under-informed science-ICT policies are letting down a generation, the Americans and the Asians are striding ahead in harnessing the digital tools available.
Of course it's critically important to seek answers about the potential downsides of technologies. Still, as I noted:
By all means the scientists should examine the Web's potential dangers as well as its advantages. They should apply Descartes' reductionism, play Devil's Advocate with subjectivity, put it through the prisms of Ptolemy, mix it up with a healthy viral of Darwin and Mendeleev and seek to synergize it all in some sort of Hawking's Grand Unifying equation.......
Just PLEASE do it in an intelligent, inclusive, informed, impartial and imaginative way. The reputation of the Royal Institution and the associated policies on education, scientific investment and economic opportunity with technology depend on this.
Complain about this comment (Comment number 23)
Comment number 24.
At 14th Sep 2009, A_PERSON_NOT_A_BOT wrote:Here's another 140:
Back to code bunker. More finking, more fun for me + pulling up my socks.
Complain about this comment (Comment number 24)
Comment number 25.
At 14th Sep 2009, SheffTim wrote:Here's a 382 ;-)
'The trouble with Milo, also, is that the film is carefully set up and scripted - I doubt he would pass a Turing test if anyone deviated from the script.' #22
It's clever programming; but extend the range of possible inputs > outputs / stimulus > responses and this tech could probably cope with an increasing number of situations and verbal interactions. As has been mentioned, a lot of human interaction is at quite a banal level (think of children playing); in life people often behave in predictable ways. It could have an "I don't know, why don't we try and find out" response to questions outside its programmed range.
Link it with the idea of the semantic web and the tech could search for information to help frame an answer to a question outside of its (Milo's) existing programmed responses. Still not AI, but a technology that could cope with a range of eventualities.
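A crude sketch of that 'script first, look it up second, graceful fallback last' pattern - my own toy Python, nothing to do with the real Milo code:

    # Toy sketch: scripted stimulus -> response, with a lookup step and a
    # graceful fallback. Nothing to do with the real Milo implementation.
    SCRIPT = {
        "hello": "Hi! Want to skim some stones?",
        "what's your name": "I'm Milo.",
    }

    def lookup_elsewhere(question: str):
        # Placeholder for the semantic-web search suggested above; a real
        # version would query external structured-data sources.
        return None

    def respond(stimulus: str) -> str:
        key = stimulus.strip().lower().rstrip("?!.")
        if key in SCRIPT:
            return SCRIPT[key]          # within the programmed range
        answer = lookup_elsewhere(stimulus)
        if answer:
            return answer               # framed from outside information
        return "I don't know - why don't we try and find out?"

    print(respond("Hello"))                 # scripted reply
    print(respond("Why is the sky blue?"))  # falls through to the fallback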
I'm unconvinced that Turing's test is one for true AI, more for a good facsimile of intelligence (can it fool a human?); but the debate as to what intelligence is, and if or how it can be fairly and accurately tested for and weighted (even in humans), could (and does) run and run. I think all IQ tests currently in use have inbuilt cultural assumptions - they depend on those tested having certain prior knowledge - that skew results, for example.
At the moment Augmented Reality is at the Top Trumps cards stage - but as with Twitter when it first launched, we don't know how many ways people will pick it up, develop it or use it in the future.
Augmented Reality displays at heritage sites could be one use, to make a room (etc) appear as it did when the building was first built, and visitors could interact with virtual objects.
The Web is only 20 yrs old; technology and the uses it is put to have evolved amazingly fast. Who knows what the next 20 yrs will bring, or which of these many emerging technologies will become widely adopted?
One of the difficulties as far as education is concerned is that there are many pioneers using many different technologies and approaches, but joined-up policy and funding often lag far behind innovations.
Complain about this comment (Comment number 25)