I'm trying to work out the "new math" that must be involved with this.
""There are 10 times more microbial cells associated with adult human bodies than there are human cells, so we are 90 percent microbial and 10 percent human," Gordon said. The bacteria can help people digest and absorb food that might otherwise be indigestible."
And my brain is boggling. Surely he meant ". . . so we are 90.9% microbial and 9.1% human . . ."?
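The corrected figures fall straight out of the 10:1 ratio; a quick sanity check in Python:

```python
# For every 1 human cell there are (per the quote) 10 microbial cells.
human = 1.0
microbial = 10.0 * human
total = human + microbial

microbial_pct = 100 * microbial / total
human_pct = 100 * human / total
print(f"{microbial_pct:.1f}% microbial, {human_pct:.1f}% human")
# prints "90.9% microbial, 9.1% human"
```

Ten parts in eleven, not nine parts in ten.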
Wednesday, November 11, 2009
Robotic Flight - The Easy Way.
It's just a thought. But our robot builders and developers now have the algorithm sussed, whereby you give a robot control over its limbs, a sense of direction, and then a directive to get from here to there as quickly and efficiently as possible. And they do learn, quite quickly, how to get from here to there. So why not model this crittur, and then let it sort out for itself what to do next?
Tuesday, October 13, 2009
E-paper - Not Soft Enough?
Much as I have always thought that one day I'd be reading my news and books and stuff on a handheld device, I agree with Brian in his article - e-book readers are a waste of good research time. I do remember thinking at the time (back in about '96 - '97) that the new PDA class computers were the way to go. I was pretty sure they'd sort out the daylight reading issues, and the power issues, and I was only mildly enthusiastic when e-ink and e-paper first rose into public awareness.
It's definitely not the way to go. For everyone who says that LCD screens aren't readable under direct sunlight, I can only say that I also don't - ever - manage to read a book, paper, or magazine under direct sunlight either. My eyes are the limiting factor here, not the medium. I hate, with a passion, the bedazzled feeling I get after looking at a paper page a foot away in our harsh sunlight here in Western Australia. E-ink isn't going to improve that situation.
For anyone who quotes me advertising posters, I say go ahead, that's about the perfect use for e-ink and e-paper. At a distance of several yards or more, the reflected sunlight is fine for direct viewing. And advertising posters don't need to change quickly. Go develop e-paper for the advertising world, Lord knows they need a lesson in resource frugality.
At this stage I'm waiting for the fusion of the PMP and the tablet PC to roll out. Given LED backlighting, better transflective displays, and power saving features combined with battery technology that is now up to the stage of running a laptop for eight hours, I can't see it being too long before I see a folding touch-screen device a bit smaller than the size and heft of a paperback that I can read outdoors in the shade, indoors in artificial lighting, and which will do the full gamut of multimedia for me.
Stop wasting time putting hi-fi speakers in the things - I have rarely ever been so antisocial as to sing or read aloud a book I'm reading in public, nor do I like the idea of playing my music for everyone within earshot. Give me a bluetooth-tethered headset instead.
Don't bother with face and eye recognition or motion sensing, because again, when I'm out and about I have rarely (well, actually, never) waved my reading material around and done DDR moves with it.
Do give it a camera and mobile phone so I can use mobile broadband as well as WiFi, do give it enough storage to hold a couple of DVDs and my favourite library of a few hundred books and web pages. The secret isn't to let me copy everything to the device, nor to store everything in the cloud - it's to work out what I prefer to read at different times and locations, and do it for me.
And for God's sake use something other than a nuclear battery, even if it's technically feasible to make one. I have visions of a fresh-faced urban terrorist collecting "Nuke-Readers" and assembling the batteries into a critical mass...
One last thing: E-paper is not going to replace the multimedia experience we're used to. It's not going to replace dead-trees paper in the form of paperbacks. And it's not going to be any good in the other function that squares of paper can be commonly used for, either ...
Thursday, September 24, 2009
Skin Me A Surrogate
This article describes the search for a perfect skin for a robot. And it's true, many times over we humans have proven ourselves to be biased against all sorts of things, and odd skin is one of them. When's the last time you shook someone's cold, damp, pudgy-feeling hand and decided there and then that you didn't quite like or trust them?
So yes, finding a skin for a robot that we're prepared to accept is important. But I'd like to point out something, and that is (as the article states) the social aspects of touch. In an agrarian society, rough, dry, slightly damaged skin and a firm handshake are "normal" and acceptable. In a city environment, you'd shrink away ever so slightly from anyone proffering such a hand to shake.
Social touching is viewed differently in different countries. And while there are some people who can't abide furred animals and therefore don't have pets, most people enjoy stroking the cat or the dog, the feel of fur under the hand. Some people in fact consider this a social prerequisite to partnership and sex, so we have a segment of the population labelled "furries," to whom fur is the socially normal skin feel.
What I'm saying is that this kind of research is good, but don't go overboard on "social acceptance" because with a few years to get used to it, people can accept pretty much anything. As long as the covering has the mechanical properties the robot needs, and isn't razor-sharp or adamantium-hard, I'm pretty sure people would accept it. Maybe even fur for where it's appropriate... %)
Then at the end, they mention the Surrogates movie, and suddenly none of it matters again - after all, if everyone is in a surrogate, it would be easier to program the surrogates to "feel" whatever covering the other robot surrogates have as "perfect skin"...
You Want To Stick A Chip Where?
So - remember when I said Augmented people will be here sooner than expected? Here's another step closer: Scientists have developed a chip that implants in the eye and feeds signals to the optic nerve at the retina. They are not claiming they will be able to restore full sight to a person, and with good reason - our eye actually has totally piss-poor resolution, and even if they managed to hit every optic nerve ending the total resolution would be only a few thousand pixels...
This is because the eye doesn't work like a camera, where each pixel of a scene corresponds to a "pixel" or nerve ending on the optic nerve. Our eyes perform "saccades", which are rapid movements, and the brain is fast enough (and the saccades are performed accurately enough) to assemble a composite image. The eye scans the scene and builds it up a few thousand pixels at a time.
Our brains also fill in a lot of the tedious detail which isn't in the center of our vision, with generic extrapolations. The reason you don't notice is because it's not in your field of attention - and when you look at that part of the scene, it becomes the center of your field of attention and therefore the eye/brain start to record that part in more detail.
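That scan-and-composite process can be sketched as a toy simulation - a small high-detail "fovea" patch landing at random spots until most of the scene has been visited. All the numbers here are invented for illustration, not physiology:

```python
import random

random.seed(0)
SIZE, FOVEA = 64, 16                     # scene is 64x64, fovea patch is 16x16
seen = [[False] * SIZE for _ in range(SIZE)]

for _ in range(40):                      # each iteration is one "saccade"
    r = random.randrange(SIZE - FOVEA + 1)
    c = random.randrange(SIZE - FOVEA + 1)
    for i in range(r, r + FOVEA):        # record the patch the fovea landed on
        for j in range(c, c + FOVEA):
            seen[i][j] = True

coverage = sum(map(sum, seen)) / (SIZE * SIZE)
print(f"coverage after 40 saccades: {coverage:.0%}")
```

Each patch is tiny relative to the scene, yet a few dozen saccades cover most of it - which is roughly the trick the eye and brain are pulling off, thousands of times a minute.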
But it's not hard to track saccades, or even just the nerve impulses that tell the eye to saccade. So once scientists build a chip that can map a scene onto that extended canvas our brains use, simulating what happens naturally for sighted people, full-field vision will be possible. It's not too hard to emulate.
That's why, for the moment, those external devices that project back onto the retina are effective, and are better than any implanted chip can be. The eye saccades, and because the picture source in this case is not fixed with respect to the eye, the image gets "swept" across the retina.
And of course, defence forces are also going to be interested in this kind of technology, because again, it provides a much needed situational awareness tool for their troops on the spot, the kind that can't be dropped on the floor and thus lost.
Me, I'm imagining myself at over 50 years of age being fitted with something like that for browsing the web, and I can see one somewhat less obvious problem: as we age, our lenses lose their flexibility and our eyesight becomes limited. Because the implant is behind the cornea and lens that cause that blurriness, any image I see from it will be crystal clear and sharp, focused at whatever distance - without me needing glasses or correction.
If I'm only overlaying a web browser on my vision, I'm pretty sure the experience of seeing one object apparently at 2m distance being sharp and clear while a physical object beside it is blurred, would probably induce headaches and nausea, and problems seeing properly.
So for the moment, external retinal projectors are still the best bet for general augmented vision.
Friday, September 18, 2009
Who'll Think Of The Space Junk?
When the amount of crap in orbit becomes enough of a problem that DARPA wants to find a solution, you begin to realise what pigs we really are. You also begin to realise that we're probably not going to make it through the weather crisis, either. Unless we have some creative solutions.
Solution to global warming? Put even more space junk up there to deflect some sun. Worry about cleaning it up in 20 - 100 years when we have the temperature and weather back in some semblance of balance. But it interferes with our satellites? Well I'd say with the kinds of austerity and cutting back on wastes of resources that global warming is going to bring, the least of your problems will be that satellite radio isn't working...
Solution to the space junk for whenever we want to clean it up? Years ago (1993/1994, I seem to recall) I was talking to a few NASA types via BBS echomail, and we figured there were ways to get reusable craft up there by converting existing types of craft, using water ablation shields, etc etc etc yada yada... The purpose? To clean up space junk...
Part of the solution was to attract private investment, set up something along the lines of converted Blackbird aircraft and small remote controlled (which these days would be autonomous) scoop craft to collect the junk and bring it to your shuttle craft. In order to save bringing all that junk back down, the plan was to aggregate it at an L5 point, and at some stage begin to salvage some of the junk and cobble together a habitat.
See, the other thing is that space is not owned by anyone. You make a space station, you're sovereign. So in addition to having collected a fee for removing space junk, you've established a stockpile of salvaged material, a habitat, and in effect, a new world...
So will someone please think of the space junk?
Wednesday, September 16, 2009
Syberious Stuffe
It's just occurred to me, so it must have occurred to other people around the world already - are we ready for a real cyberwar? This article on RISKS Digest (Networks and Nationalization With Respect to Cyberwar) crystallised an already-forming idea of mine. We've already liberated a lot of the proletariat, over the centuries. From serfdom to clerkdom to CEOdom, baby! But of course I mean more than that, I'm a devious bastard that thinks sideways. %)
What I mean is - speech empowered the caveman types, because it allowed the transmission of more complex ideas and procedures - in short, it allowed culture to blossom. Culture in its turn created aggregations of people that were more than just an extended family; it turned us from a widespread population of creatures into much larger distinct social units of humans. Along the way, it stratified those social units so that we ended up with villages with headmen, hunters, farmers, providers, and so forth.
Our next communications leap was writing. This let us cement those units over generations. Village traditions and historic tales, once committed to a more permanent record than oral tradition, kept the identity of those units from one generation to the next, and allowed other villages to be informed, to join villages into larger communities.
Writing was initially for the elite. Despite the limited reach of the written word, it achieved an agglomeration of villages, the dissemination of stories and legends, and of course, it provided a way to record tithes and taxes, births and deaths, and laws. Despite not being able to read, the population allowed themselves to be ruled by the power of those recorded words. Writing gave some people control over others.
Then, writing and reading became more mainstream, and ordinary people were suddenly able to read and contribute. That empowered a great many people to begin to advance the sciences, the arts, the politics.
Finally, the printing press made all that knowledge available to pretty much anyone. Not by coincidence, the technological revolution quietly snuck in with this phase, and liked it here. Because suddenly anyone could learn the science, the oratory, the mathematics of the time, and many minds make mincemeat of stagnation.
As a side effect, more and more people could also get their works into print. Our "world library" turned from something that may have had five thousand well-considered, well-written, and accurate works into something that had millions of works. Dilution occurred; you had to pick the books you read in a lifetime. But somehow, people managed it. They sorted the gems from the overburden, and many went on to further their fields with more well-written works.
Printed books cemented countries and locked in politics, defined and entrenched fields of science, created larger units of social structure, some of which (to the horror of the aristocracy) seeped across old political borders and formed international bodies. Also, of course, smaller societies were formed within social units, clubs and arcane guilds, secret societies, and insurrective organisations.
Which brings us to the Internet. Nothing has shrunk the world, or aggregated those larger social units, quite like the Internet. Suddenly too, sifting all the books of the world and pulling threads from the dross looks like an easy task compared to absorbing the overload of material that's here online now. How did you come across this article?
But again, the medium has broken down barriers and made us a larger body. And again, smaller units are given much power to change things. It really is at the stage where I can put up a flag in my loungeroom, call it New Cyberia, and begin to take down the infrastructure of any country I want to, one power grid control at a time, where I can take over one missile or fire control network after another, ground aircraft and put ground traffic into disarray - all in the name of New Cyberia.
And if I'm clever about it, I could end up being given the keys to half the developed nations of the world in return for their ability to function properly again, without anyone ever seeing me or my flag, nor even knowing which country it's in...
That makes the remark about the teenager in their basement seem a lot more sinister and imminent than it did before. Because if I can think of it, someone is already working on doing it...
Tuesday, September 1, 2009
Killer Technology Developed
There have been studies done that prove that texting while driving is extremely unsafe. And it's - luckily - been quite easy to spot a texting driver. Yet they still do it, because I've watched dozens in the last month or two, swerving ever so slightly, eyes downcast and some even hoisting the phone up to dash level to text, not really realising or caring that they can't focus on the close-up screen and the distant road conditions at the same time. It's a law of optics.
Then too it's been shown that even just having a conversation while driving can impair driving skill by as much as a good session at the local pub. And it's a bit harder to spot a driver with a bluetooth in their ear, but they still do that, too. And again, if you look for the telltale lapses of attention, you can usually tell which drivers are doing it.
Even pedestrians are not safe, with news suggesting that maybe having your music player's earpieces in and drowning out ambient noises makes you more prone to having accidents. While walking. For chrissakes. This isn't obvious to the people wearing their music?
Let's face it - people who multitask are actually very bad at it. We're not yet built to multitask. But that of course doesn't stop people doing it in droves, does it?
It's also becoming more and more difficult to spot the once obvious signs - drivers with their cellphone up at eye level or up to their ear are now able to use speech to text, inbuilt handsfree systems, and any number of other technological advances to enable them to become a little bit more unsafe. More power to them, as long as they take each other out and not me.
Or they can take out:
The pedestrians, who once upon a time you could tell were attention-impaired because they were hauling a boom box on their shoulders. Then they had reasonably visible headphones, and now they are a great deal harder to spot except for the white wires and their tendency to be found under buses, trains, and other vehicles with texting drivers...
Now there's a game changer: a technology that will make spotting the distracted nearly impossible. Today's distracted will become the "old school of distracted," orders of magnitude easier to spot than the people wearing this technology, whom you won't be able to spot at all...
Look - I know I'm a bit of an evangelist for Augmentation, and I will always be. The more we develop, the more we are finding that augmentation is convenient and in some cases (artificial hearts, kidneys, etc) even vital.
I think people have an amazing capacity to assimilate this kind of augmentation into their daily lives, as witnessed by - well, as witnessed by the number of people who safely manage to integrate hands-free calls, car music systems, iPods, and making calls while walking, into their daily lives.
Yes - for every dickhead out there putting everyone's life and limb at risk, there are a few dozen that either choose an appropriate time to perform such attention-sapping behaviours, or else have managed to train themselves to perform multiple functions without significant deterioration of their survival skillsets.
It's easy enough to train people to integrate multiple information streams at once, and manage to coordinate their actions at the same time. It's a matter of teaching them how to assign attention to every stream, assign priorities, and then act on the highest priority inputs first.
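That attention/priority/action cycle is, in effect, a priority queue. A minimal sketch - the event streams and the priority numbers here are invented purely for illustration (lower number = more urgent):

```python
import heapq

# Hypothetical input streams a driver might be juggling at one moment.
events = [
    (3, "incoming text message"),
    (1, "brake lights ahead"),
    (2, "navigation prompt"),
    (3, "song change on the stereo"),
    (1, "pedestrian stepping off the kerb"),
]

queue = []
for priority, event in events:
    heapq.heappush(queue, (priority, event))

# Act on the highest-priority inputs first, deferring the rest.
handled = []
while queue:
    priority, event = heapq.heappop(queue)
    handled.append(event)
    print(f"[p{priority}] handle: {event}")
```

The hard part of the training isn't the queue discipline, of course - it's teaching people to assign the priorities correctly in the first place, so "brake lights ahead" always outranks the text message.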
The Defense forces are able to train soldiers to do that, generally by brute force repetition, but more and more by giving training designed to reinforce the attention/priority/action cycles. And they've been doing a smashingly good job at it, too - they can keep a soldier alive on the battlefield while listening to orders, watching the situation, doing quite complex tasks with their weapons and equipment, and even managing to breathe...
So I'm suggesting that perhaps rather than making it illegal to use a phone while driving or walking, let's make it a licensed activity. As part of the driver training, teach new drivers to deal with distractions, assign priorities, and carry out the relevant actions. Teach pedestrians how to maintain watchfulness while using the technology. Then make it one of the things they must be tested on if they want their license to include that activity.
You can't eliminate the technology, and pretending that legislation can make it go away is just fairy dust. This kind of stuff should be taught at school level, and reinforced at every level-up people go through. Learning to drive a train? Then dealing with cellphones and text messages should be an integral part of that training.
Because no matter what, you already know that in two - three years you'll have the first people walking around with these contacts in and playing augmented real life pacman among the traffic...
Then too it's been shown that even just having a conversation while driving can impair driving skill by as much as a good session at the local pub. And it's a bit harder to spot a driver with a bluetooth in their ear, but they still do that, too. And again, if you look for the telltale lapses of attention, you can usually tell which drivers are doing it.
Even pedestrians are not safe, with news suggesting that maybe having your music player's earpieces in and drowning out ambient noises makes you more prone to having accidents. While walking. For chrissakes. This isn't obvious to the people wearing their music?
Let's face it - people who multitask are actually very bad at it. We're not yet built to multitask. But that of course doesn't stop people doing it in droves, does it?
It's also becoming more and more difficult to spot the once obvious signs - drivers with their cellphone up at eye level or up to their ear are now able to use speech to text, inbuilt handsfree systems, and any number of other technological advances to enable them to become a little bit more unsafe. More power to them, as long as they take each other out and not me.
Or they can take out:
The pedestrians, who once upon a time you could tell were attention-impaired because they were hauling a boom box on their shoulders. Then they had reasonably visible headphones, and now they are a great deal harder to spot except for the white wires and their tendency to be found under buses, trains, and other vehicles with texting drivers...
Now there's a game changer. A technology that will make it less necessary to spot the distracted. Because they will be the "old school of distracted," and several orders of magnitude better than the people wearing this technology, which you won't be able to spot at all...
Look - I know I'm a bit of an evangelist for Augmentation, and I will always be. The more we develop, the more we are finding that augmentation is convenient and in some cases (artificial hearts, kidneys, etc) even vital.
I think people have an amazing capacity to assimilate this kind of augmentation into their daily lives, as witnessed by - well, as witnessed by the number of people who safely manage to integrate hands-free calls, car music systems, iPods, and making calls while walking, into their daily lives.
Yes - for every dickhead out there putting everyone's life and limb at risk, there are a few dozen that either choose an appropriate time to perform such attention-sapping behaviours, or else have managed to train themselves to perform multiple functions without significant deterioration of their survival skillsets.
It's easy enough to train people to integrate multiple information streams at once, and manage to coordinate their actions at the same time. It's a matter of teaching them how to assign attention to every stream, assign priorities, and then act on the highest priority inputs first.
The Defense forces are able to train soldiers to do that, generally by brute force repetition, but more and more by giving training designed to reinforce the attention/priority/action cycles. And they've been doing a smashingly good job at it, too - they can keep a soldier alive on the battlefield while listening to orders, watching the situation, doing quite complex tasks with their weapons and equipment, and even managing to breathe...
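That attention/priority/action cycle can be sketched as nothing fancier than a priority queue. Here's a toy Python version - the stream names and priority numbers are entirely invented for illustration, not taken from any real training system:

```python
# Toy "attention/priority/action" loop: events arrive from several
# streams, each stream carries a fixed priority, and the most urgent
# event is always acted on first. Stream names and priority numbers
# are made up for illustration.
import heapq

PRIORITY = {"hazard": 0, "navigation": 1, "phone": 2, "music": 3}  # 0 = most urgent

def attend(events):
    """Yield events in priority order, preserving arrival order on ties."""
    queue = []
    for seq, (stream, msg) in enumerate(events):
        # seq breaks ties so equal-priority events keep arrival order
        heapq.heappush(queue, (PRIORITY[stream], seq, stream, msg))
    while queue:
        _, _, stream, msg = heapq.heappop(queue)
        yield f"{stream}: {msg}"

incoming = [("phone", "incoming call"),
            ("hazard", "pedestrian ahead"),
            ("music", "next track"),
            ("navigation", "turn left in 200 m")]
print(list(attend(incoming)))
# ['hazard: pedestrian ahead', 'navigation: turn left in 200 m',
#  'phone: incoming call', 'music: next track']
```

The phone call arrived first, but the hazard still gets handled first - which is the whole point of the training.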
So I'm suggesting that perhaps rather than making it illegal to use a phone while driving or walking, let's make it a licensed activity. As part of the driver training, teach new drivers to deal with distractions, assign priorities, and carry out the relevant actions. Teach pedestrians how to maintain watchfulness while using the technology. Then make it one of the things they must be tested on if they want their license to include that activity.
You can't eliminate the technology, and pretending that by legislating you can make it go away is just fairy dust. This kind of stuff should be taught at school level, and reinforced at every level-up people go through. Learning to drive a train? Then dealing with cellphones and text messages should be an integral part of that training.
Because no matter what, you already know that in two - three years you'll have the first people walking around with these contacts in and playing augmented real life pacman among the traffic...
Thursday, August 20, 2009
Lindens Losers Again
I won't spend much time on this article except to say that the comments say it all, really. I've used my cellphone to talk to people in SL before; it's much easier and less prone to breaking. And cheaper than paying Linden Labs another fee for something that already exists.
I said before that Second Life was the most unchanged thing in my life in the last year and a bit, and hazarded that it's because some technology doesn't need changing. But on the issue of fixing known issues before introducing new erro- new features - I think LL are doing themselves no favours when they trot out this kind of crap.
Tuesday, August 18, 2009
No Way This Can End Badly
Autonomous warehouse 'bots. Check.
Cyborg rat brains. Check.
Interfacing and interacting. Check.
Nothing to see here, move alo - wait. No, wait... Really...
The grids of neurons that exhibit oddly synchronised waves of activity, and that can "learn" in limited ways, are first steps. Judging by the way technology accelerates its own development, we should see results from this in under five years: results along the lines of biological control units that can be stuck into any framework and figure out what to do and how to do it.
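For a feel of how grids of simple coupled units can fall into synchronised waves, here's the standard Kuramoto oscillator model in Python - a textbook toy, emphatically not the actual neuron cultures, with every parameter invented:

```python
# Toy Kuramoto model: N coupled oscillators with random natural
# frequencies. With enough coupling they phase-lock, a crude analogue
# of the synchronised waves seen in cultured neuron grids.
# (Illustrative sketch only - the real cultures are far messier.)
import cmath
import math
import random

def order_parameter(phases):
    """|mean of e^{i*theta}|: near 0 = incoherent, near 1 = synchronised."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

def simulate(n=50, coupling=2.0, dt=0.05, steps=2000, seed=1):
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(0.0, 0.3) for _ in range(n)]  # natural frequencies
    for _ in range(steps):
        # mean-field form: each oscillator is pulled toward the average phase
        mean_field = sum(cmath.exp(1j * p) for p in phases) / n
        r, psi = abs(mean_field), cmath.phase(mean_field)
        phases = [(p + dt * (w + coupling * r * math.sin(psi - p))) % (2 * math.pi)
                  for p, w in zip(phases, freqs)]
    return order_parameter(phases)

print(f"coupled:   {simulate(coupling=2.0):.2f}")   # high, close to 1
print(f"uncoupled: {simulate(coupling=0.0):.2f}")   # low, stays incoherent
```

Same oscillators, same frequencies; the only difference is whether they can "hear" each other.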
And the neat video in that last link (look down the bottom of the article, there are two thumbnails that link to a video each) shows how easily a cluster of neurons in one country can be effectively co-opted into operating a "body" in some other widely separated place.
All in all I'd expect to see this technology in military applications by the end of the year (they always get the coolest toys first), and in some "robotic realistic cuddle pet" animatronic that you have to keep fed with a special nutrient solution by Christmas 2010... And running the world at any time after that...
Plurk'd, Twitter'd, and Tumblr'd
Do people "get" Twitter? Do they "get" Second Life? Judging from articles and stuff I read, no, most people don't get most of the important advances in communications technology at all. And that's what they are, communications technologies. People communicating with people, albeit in a way that a seeming 75% of those people don't actually "get."
For about a year now I've only been into SL to fix problems for sandbox users, and used Twitter mainly to keep an eye on a steadily growing group of friends - whom I also keep up with on Friendfeed, Plurk, Flickr, Facebook, and a growing number of blogs and photo/microblog/video/location services. It's not that I am afraid those people will slip through the net or that I might miss a gem of wisdom -- rather, it's because those services aren't all duplications of each other. They each have distinct purposes, and convey different information. Even when you look at two microblogging services, they have different aims and people use them for subtly different things.
Well, those that get it manage to use them in subtly different ways, anyway. It's what differentiates the savvy from the unwashed. To me, there are now two types of people in the world: those that can, and then the rest. I also think that people can gradually absorb the technology, but not lose it. A bit like walking - you never forget.
Tonight I logged into SL and chatted to someone I hadn't met there for a good six months, and as usual we swapped building tips and good locations to build at. And it amazed me how quickly I was back doing all the things that make a good builder in SL, examining and prodding at everything the other person was showing me, keeping an eye on the various group messages, trying the builds out for myself...
And then it hit me - in the whole of my life over the last year and a bit, the world around me has been changing, weather patterns are different, politics changed forever, the economy in freefall and recovery, everything changing almost by the day. And the most malleable and innovative place in my life, Second Life, has been the most stable and the least changed... That's how I measure how well the users "get" something - if it's a good application that does its job, then it doesn't have to be changed because enough people will stick with it and make it their own.
Anyhow - I said all that to set this up - an article that says we'll have indoor holidays and robot prostitutes. I'm particularly impressed with a "visionary" who doesn't actually seem to get what he's talking about...
Before you go telling me that he must be right cos he's a perfesser, think on this: He claims to have factored in all the elements, climate change, technology, blah blah yada yada so why are his lips moving and all I smell is gently steaming male cow wastes?
I'll tell you why: it's because of the "...costs for basics such as electricity and food increased..." among other things. Paradoxically, as we pass peak oil and start generating really cheap sustainable energy from ever cheaper wind, wave, and solar energy collectors, Yeoman reckons electricity will increase in cost. And what's his brilliant answer to those posited higher costs? Of course! Why didn't I think of it? Instead of doing what operators have been doing for millennia and exploiting poor people, we're all going to use more robots that run on electricity! Unless his robots actually run on melted-down poor people...
And those "giant cruise ships" - what are they going to be powered by? Bullshit? Cos there's a headstart being made on that fuel source, I tell ya... And of course we're going to just keep making ever larger and more environmentally unfriendly wastes of resources like those cruise ships and refrigerated covered ski slopes in the tropics because they don't waste resources and electricity like crazy - right? Yeah right...
Resources - all resources - are stretching thinner and thinner, so I doubt that being seen prodigiously wasting them will be seen as a Good Thing in another few years, and certainly most of the world will frown when those precious resources are thrown away on a holiday destination...
Also he's blaming the need on the old "ageing population" crap. Come on, how many more generations of people does he think will become old weak frail cripples, especially when there's a choice of wasting resources on building a holiday destination for the geriatric frail aged or using those resources to make sure bodies don't age and infirmities don't eventuate?
I'm also having a problem with his dystopic future where it seems that most people won't have employment because those sneaky robots will have usurped all the jobs, working as they will for that hellaciously expensive electricity rather than a few dollars an hour. And of course that presumes that there will still be people with jobs that can afford to visit one of those holiday attractions, seeing how robots have taken all the jobs. How do the customers in his vision earn the money they will spend? To cover those gigantic food and electric costs? Something here is not a balanced economy.
Those are the main reasons I think that here's another person that just doesn't "get" what they're talking about, and I only hope he starts learning through osmosis, the basic facts and figures of what the next fifty years hold for us. I don't think that any of those visions will come true except as one-off projects that will survive as curiosities.
I'm more inclined to say that as electricity becomes the preferred power source over fossil fuels, and it becomes apparent that it's harder to fly Mohammed to the mountain by electric transport than it is to send a VR mountain to Mohammed's computer, this "blabber" that's currently not understood by so many will become the lingua franca (and eventually the mother tongue), and a lot more holidays will be carried out a good deal closer to home, and a lot more often than just once a year.
And you can Plurk, Twitter, and Tumblr that.
Thursday, August 6, 2009
If It Takes Pictures, It's A Camera - Right?
Or perhaps wrong. Sony have released a gadget they call the Party Shot. I am having trouble calling this a "gadget" instead of what it is, which is a robot. People get this idea that a robot has to be something spectacular and just the other side of the Uncanny Valley, but in fact it's a machine that can perform certain functions in an automated way. ("Robota," Czech for drudgery or forced labour, is the origin of the word; Karel Capek is generally credited with first using it to mean mechanical workers.)
This is a gadget that follows the people in a room, focuses and composes the shot, and takes the shot. It does that in an independent and autonomous way, it uses the camera as the tool to accomplish that, and to me that's the definition of a robot.
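Stripped to its bones, that's a sense-decide-act loop. Here's a minimal sketch in Python - the "frames" and face detection are stand-ins I've invented, not Sony's actual firmware:

```python
# Minimal sense-decide-act loop, the skeleton that (to my mind) makes
# a thing a robot rather than a gadget. Frame format and detection
# logic are invented stand-ins for illustration.
def detect_faces(frame):
    """Stand-in for real face detection: returns the faces in a frame."""
    return frame.get("faces", [])

def run_party_shot(frames):
    shots = []
    for frame in frames:
        faces = detect_faces(frame)                       # sense
        if faces and all(f["in_focus"] for f in faces):   # decide
            shots.append(len(faces))                      # act: take the shot
    return shots

frames = [{"faces": []},                                   # empty room: wait
          {"faces": [{"in_focus": False}]},                # blurry: keep composing
          {"faces": [{"in_focus": True}, {"in_focus": True}]}]  # ready: shoot
print(run_party_shot(frames))  # [2] - one shot taken, of two faces
```

Swap the camera for a weapon in that "act" step and you have, in miniature, exactly the argument the ethicists are having.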
It's not too far away from a description of another gadget that would have to be called a robot, and which Defense contractors, Defense personnel, and ethicists are arguing about at length and in depth:
The argument over whether or not to let a robot AI pull the trigger on an attack has been vexing DoD and their contractors for years now. I say, give Sony or some other commercial outfit the contract - and your killer drones will even send home nicely framed and "anti-shake"d pictures of the resultant carnage...
UPDATE: At least this article gets it right, it's robotic.
Monday, August 3, 2009
The Case Of The Nanodiamond Meteorite
Science-y, and slightly related to cyborgs and technology, is the question of nano-diamond meteorites. It occurs to me that this is one of those things I read when it happened and didn't bat an eyelid at. But leave it to 2AM, having just laid down to get to sleep, for a bit of a revelation to make itself felt...
It's composed of various forms of carbon. Out there in space, according to the second page of that same article, is a fairly large chunk of carbon which that meteorite was probably once a part of. And carbon is a prerequisite for life. (Life as we carbon-based lifeforms know it, at any rate.) I also get this impression that pretty much every piece of carbon I've ever seen was part of a living creature at some stage. Like, there's no such thing as a "carbon rock" that I'm aware of.
Coal is a carbon rock, but it is formed from decomposed and compressed living creatures. Diamonds are carbon compressed further still - though, to be fair, geologists reckon most natural diamonds formed from carbon deep in the mantle rather than from coal. Still, every bit of carbon I've ever seen in my life and travels was alive and kicking at some stage in its lifecycle.
What are the odds that all the carbon in space is just inorganic in origin, given our experience here on Earth is so vastly different? Which means that perhaps there is a good reason for presuming the existence of a lot of other life forms in the rest of the Universe.
And that takes me to a random thought about electronic/mechanical life, which we're on the brink of creating, and which is going to be based on silicon and other materials that we don't consider a building block of life, and which (as far as we can tell) were never part of a living thing the way carbon was.
Maybe that's going to be one of the bases of a definition of life? That "life" has to be able to recycle its own elements and molecules, over and over?
Tuesday, July 28, 2009
Swarm Wars
Was just watching a news promo where the focus of one of the stories is on a child's bicycle tyre with drugs hidden inside. Have also read (of course) about the drug mini-subs that are being used to run stuff into the US. And I've reflected that all through history there have been various kinds of drug struggles between the eternal triangle of those who source, those who use, and those who prohibit.
There is a grass-roots support group of people who want the drugs, and who end up as the victims of course - but perhaps that's the function of drugs, to act as some kind of Darwinian selector. Who knows. There is a group of people who want to supply those drugs in return for material gain of their own, no matter who that kills. And there are prohibitionists who act to try and save one group from the other.
It's kind of like war.
But, of course, martial type warfare is so much more advanced, for some reason, than drug wars. They have UAVs and packbots, and plenty of them. They save putting lives at risk, are relatively cheap and easy to put into the field, and can slip around almost undetected while carrying a warhead, camera, or other payload to a destination quite a distance away.
Oh yeah, and this follows on the heels of watching a TV article showing a small commercial factory unit successfully making small UAVs and delivery systems.
See, I wonder how long before you see (or don't see) flights of hundreds of small, hard-to-detect unmanned robots, each carrying a small amount of drugs, flying random paths to widely spread destinations, and making it pretty much impossible to stop all of them. I'm almost betting that within a few years you'll read about "swarms" of these things; drug suppliers aren't stupid, and this is now proven technology...
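The arithmetic behind "impossible to stop all of them" is simple enough: if each drone is independently intercepted with probability p, the chance that at least one of N gets through is 1 - p^N. A quick sketch (the interception rate is an invented round number):

```python
# Chance at least one of N independent drones evades interception.
# Even a very good interception rate loses badly to sheer numbers.
# The 90% figure below is invented purely for illustration.
def leak_probability(p_intercept, n_drones):
    """P(at least one drone gets through) = 1 - p^N."""
    return 1 - p_intercept ** n_drones

for n in (1, 10, 100):
    print(f"{n:3d} drones, 90% interception -> "
          f"{leak_probability(0.9, n):.1%} chance one gets through")
# 1 drone -> 10.0%, 10 drones -> 65.1%, 100 drones -> 100.0%
```

Which is exactly why a hundred cheap drones each carrying a little beats one sub carrying a lot.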
This may be the last decade before the Coming Of The Drug Locusts...
Wednesday, July 15, 2009
Moth Wing Nanos?
Just a sudden, intense, very vivid flash of memory:
When I was in my late 20s, my father and I were discussing huge scientific topics - if I recall, it was something along the lines of how long to spit-roast a pig in the hot 43C (109F) summer of Wittenoom, Western Australia. It included discussions of the effects of various bastings and a possible cure or marinade in plastic wrap for a day beforehand, and progressed via the (then still fairly new) concept of radio wave cooking (which became the familiar microwave of today), then meandered off into discussions of science and technology. We rambled a lot over a few beers, did Dad and I...
In the middle of that conversation, I killed a moth that was threatening to slurp up precious beer, and there was a cloud of dust from the wings of the moth, and Dad got animated and mentioned that the dust off moth wings was harmful to inhale. He'd seen research that had established that moth wing dust was composed of - and I don't for the life of me remember if he actually said "nanoparticles" or just "microscopic particles" - but it was pretty much the first time I'd actually thought about the properties of materials on a nano scale.
I incorporated a particular nanoparticle, a "monomolecule knife," into one of my unpublished Cycle novellas, on the back of thinking about nanoparticles for the next five years. And I've seen that researchers have developed particular carbon nanotube structures based on examples from the animal kingdom, and now there have been advances in adhesives based on the micro hairs on the feet of the gecko. Other researchers have made significantly large single molecules which reminded me of the mono knife in my novella.
But so far I haven't heard about any nanotech researchers studying the fuzz on moth wings for an inspiration...
Just wondering - I tend to think of these things just around the same time as other people do, and often see my brainfarts made concrete by the magic of parallel development...
Monday, July 13, 2009
VIPRE Virus and Malware Protection
As many of you know, I worked as a System/Network Administrator for 15 years before my health caught up with me, and in that time I made an observation or two about people. No matter what I did to protect the workstations, someone always found a way around it eventually. And of course, because I listened to my users, I found out what the objection was: the performance hit. Any antivirus solution I deployed caused serious performance degradation on the older boxes of the day.
It's to be expected, then, with malware, machines, and malware protection all scaling up at roughly the same rate, that today's protection software would take a similar percentage hit on the performance of today's machines. To a great extent, this is sort of true. Modern virus and malware protection software causes a bit less of a slowdown, but every package I've tried still made a significant difference to the machines I tried it on.
Sunbelt Software have hired me to say a few words about their antivirus software, VIPRE. Being a conscientious type, I downloaded and installed the free 15-day trial of the software, and being a bit less than conscientious %) I then went to sleep and let it download the definitions and perform the first scan. That would be because I was doing this around 3AM my time and fell asleep at the desk. Therefore, I can't tell you if there was any thrashing the first time it ran - but I can tell you that I didn't notice it in the following few days, whereas every other A/V program I've ever known has caused unpredictable lagging and slowdowns.
Aside from the definition files, VIPRE can also run suspect programs in a virtual environment where they can't do any harm, helping VIPRE to identify and isolate malware. And while I don't personally use Outlook, VIPRE integrates with Outlook to catch email-borne malware as well.
Sunbelt Software have a 15-day trial available; I haven't uninstalled mine yet because, quite frankly, I'm thinking I might upgrade it to the full version. If I'd had something like this back when I was working, I'm pretty sure I'd have been able to convince users not to try and unload their A/V software...
As a further incentive, Sunbelt are also offering a price reduction (50%!) on their personal firewall product if you buy VIPRE. Sunbelt Personal Firewall is ranked quite highly by other writers, and if you were looking to protect your machine on multiple levels then you probably couldn't go far wrong if you took advantage of that offer. Even on a home computer or a machine already behind a home router firewall or office firewall, putting a personal one on your PC is still a good idea.
Sunday, July 12, 2009
Dealing With The Internet In The Workplace.
Most companies, since companies began to be the preferred business unit, have wanted just a few things from their employees: Be as young and energetic as possible; Be as experienced as the older staff; Spend 100% of your time working, no meal or toilet breaks would be just bewdiful thanks; Work for junior rates doing a senior job.
After all, it's not much to ask, is it? And yes, yes, yes, I'm being sarcastic. The work day for all critters seems to be a big long but amusing search for food interspersed with intervals of work, not the other way around. Articles like this one demonstrate the efforts that management have to go to in order to get that work/leisure/effort/effect balance right. Do you open the Internet? Block most of it? Have company firewalls, or secure application servers?
Let me put it this way - twenty years ago, how did companies prevent the equivalent circumstances? Instead of a firewall, there was a mailroom that inspected everything going inbound and outbound. Long employee chats with colleagues were examined for whether they were relevant to work or not, and then controlled as needed by supervisors. And try as they might, employers had no way of preventing employees from walking out with company secrets firmly engraved in their brains, any more than they have a way today to prevent a company database walking out on an iPod or MP3 player or memory stick or card...
The thing that worked best, it turns out, is maintaining watchfulness. So here's a thought, given freely after nearly two decades of dealing with users and technology: Watch, Decide, Act.
Watch what happens in your office and on your LAN. Are your servers logging and flagging unusual events? That's the first thing you should be watching - and in order for the "unusual" flags to apply, you have to decide what constitutes "usual" and "unusual." Capture all traffic and put it through transparent proxies, not to prevent it, but to record it. Experiment with the output of the logfile. A good analyser program will soon start telling you which machines spent how much time surfing what, where, and when.
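To make that "record and analyse" idea concrete, here's a minimal sketch of the kind of per-machine summary a log analyser produces. The log layout (timestamp, client IP, bytes, domain) is invented for illustration and doesn't match any particular proxy's real format:

```python
from collections import defaultdict

def summarise(log_lines):
    """Aggregate hit counts and bytes transferred per (client, domain)."""
    totals = defaultdict(lambda: {"hits": 0, "bytes": 0})
    for line in log_lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip malformed lines rather than crash
        _ts, client, nbytes, domain = parts
        entry = totals[(client, domain)]
        entry["hits"] += 1
        entry["bytes"] += int(nbytes)
    return dict(totals)

# Hypothetical sample entries, just to show the shape of the report.
sample = [
    "1247100000 10.0.0.5 18000 youtube.com",
    "1247100060 10.0.0.5 22000 youtube.com",
    "1247100120 10.0.0.7 900 intranet.local",
]
report = summarise(sample)
print(report[("10.0.0.5", "youtube.com")])  # {'hits': 2, 'bytes': 40000}
```

A real analyser adds time-of-day bucketing and reverse DNS on top, but the core is exactly this sort of grouping.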
Similarly monitor all accesses to your own servers and workstations. Is that new person poking around on the Sales VP's machine? That might be deemed "unusual," unless they are the SVP's secretary or PA. Is someone repeatedly trying to send malformed traffic to your SQL server? Time to check that they're not hacking you, or that their workstation might not be compromised by a Trojan.
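One cheap way to operationalise "usual vs unusual" is to compare each new event against a per-user baseline. This is a toy illustration with made-up numbers and an arbitrary cutoff, not a real intrusion-detection method:

```python
from statistics import mean, stdev

def is_unusual(baseline, new_duration, z_cutoff=3.0):
    """Flag new_duration if it sits more than z_cutoff standard
    deviations above this user's own historical mean."""
    m, s = mean(baseline), stdev(baseline)
    if s == 0:
        return new_duration > m  # flat baseline: any increase is notable
    return (new_duration - m) / s > z_cutoff

# Minutes an employee usually spends in the payroll app (invented data).
baseline = [40, 45, 50, 42, 48]

print(is_unusual(baseline, 300))  # True - worth a closer look
print(is_unusual(baseline, 47))   # False - within normal variation
```

The hard part, as the rest of this post argues, isn't the arithmetic; it's deciding which baselines are worth keeping and who reviews the flags.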
As has been repeatedly pointed out, you can't prevent sufficiently determined employees from doing all the above and more. Shutting the barn door after the horse has bolted may well lose you a horse, but shutting it before the horse bolts prevents anything from happening in the barn at all. Making the door more person-sized means all your chickens will still be able to escape, and raising it still won't prevent the rats and pigeons from using it...
The problem lies in working out what's usual and useful, compared to unusual and harmful. Block YouTube or Facebook? Fine, but some people actually use those in the course of their duties. I might find it easier to post a video and let a sales prospect know about it via their FB account. On the other hand, I might also be using those two to dig up dirt on colleagues to coerce their cooperation with a pet project or a project to steal data...
In all the time I worked in IT and system and network administration, problems such as the above were always "unusual," and that's generally how they were picked up - precisely because they were unusual. Locking down access and traffic generally resulted in less bandwidth use, but no discernible change to the risk/benefit ratio. I became a great logfile reader, and picked up no end of minor breaches that way. There are now programs that do what I did, but they need to be trained and set up, and usually the person that has to do that is the system admin.
In an age where personal electronics is everywhere, it's also possible for an employee to place company data onto their personal multimedia device, then connect right up to the McDonald's hotspot right outside your office and upload that data - it takes only minutes, or even seconds given a suitably skilled person - and all your logs would show is that employee "Z" accessed the payroll database for a bit longer than normal.
That information didn't - and can't - help you at the time, but it will form a trail that can be backtracked on when you discover that your best people have been headhunted at salaries that are just the right margin above the salary you were paying them...
"Blocking" the external WiFi hotspot? You may as well try to hold back water with a flyscreen. And indeed it may well be illegal to block wireless communications, in some places, and with certain forms of wireless. You may not, for example (as far as I'm aware), block mobile phone signals at any time. And what if the superduper "cellphone buster" you've placed in operation to stop employees spending all their time calling friends and family also blocks cellphone access to the company next door? And if you reduce the range of the quencher, then you'll find knots of employees in the areas that the signal misses...
My personal belief is that keeping people engaged, involved, rewarded, and stimulated is the best way to command loyalty, and watchfulness to make sure that this loyalty doesn't waver is the second step.
Thursday, July 9, 2009
Veni Vidi Virus
And here's the rest of the world catching up to my thoughts from years ago... In the dusty archives somewhere, I think I had an article about what will happen to implant wearers once those implants connect part of their real life bodies to the virtual world, to the real life world of script kiddies, sociopathic hackers, and other online entrepreneurs.
Here's one of the things that worried me then, and worries me now. Suppose I have a prosthetic limb, or just an enhancement to an existing limb, controlled via a brain implant, which in turn has wireless connectivity to allow control and adjustment of the augmentation. Suppose some script kiddie finds a way to reverse a few numbers somewhere. Now I happen to be walking along an elevated footpath; I think "move right," the brain implant issues a "move right," and the augmented limb moves left. It knocks a fellow pedestrian off the footpath; they crack their head, and subsequently die.
Who's guilty now? Me, for being the one who physically carried out the action leading to the death? The manufacturer of the system, for not building in safeguards and intrusion protection? The hacker, wherever and whoever they may be, who may have carried out the hack without any knowledge of where I was and what I was doing at the time?
This situation isn't just going to apply to body-worn augments, of course. Assume I'm an early adopter and have a personal robot assistant. Or (and this would be one reason car manufacturers might be a tad shy to put automatic drive controls in their cars) I have a personal transporter that pretty much runs on autopilot and connects via mobile broadband to download routes from Google Maps. Only Google Maps has been hacked, and along with the map it downloads a zero-day payload...
You can see that where the Law has been slow to catch up with current cyber wrongdoing, the rate of change is about to ratchet up, bringing a whole new order of moral and legal decisions needing to be made.
Wednesday, July 8, 2009
Saw Point
As a teenager, I was pretty handy in manual arts, both woodwork and metalwork. I remember, once I left high school, pricing the odd tools for handyman jobs both at my parents' house and for my own projects, and one of the more highly priced (and prized) of my hand tools was a Disston foxtail saw. Back then I didn't know the background or the history of Disston saws; I just knew that they were one of the better quality saws available, that they were imported from "overseas," and I saved for a few weeks before I could buy mine.
Now the back story is filled in for me, by this short story and the lovely slideshow of pictures, of the history of Disston Saw Works in Philadelphia.
Tuesday, June 9, 2009
Upload Thyself.
So now we're going to get a method to upload your body to a computer. Fair enough, it's just a virtual body, but that's all you need in virtual life. I've already posited that this will happen, and that we'll also get the technology to upload our brains. (Don't ask me when - look wayyyy back in this blog, cos I predicted this wayyyyy back...)
So am I going to sit back smugly and say I tolja so? YOU BETCHA! %)
Tolja so! Nyah!
Tuesday, May 26, 2009
Day Not At The Office
Maybe it's working in IT. Or maybe just working at an office that Scott Adams could have based his comic strip on. But it puts me of a mind to agree with this Treehugger article. The days of the office are over.
One of my least favourite activities had to be meetings. Once, sometimes twice a week. To begin with. But then suddenly there were meetings to schedule meetings, meetings to discuss stuff raised at other meetings, and more. I'd often take a wirelessly-connected laptop to those meetings so that while it might look like I only had my notes and spreadsheets with me, I was actually remote logged into the servers and doing the work that I would have been doing had the damn meeting not been called...
So I agree - in a teleconf, who hasn't been doing other things? And why not? That whole argument by Lane Wallace smacks of someone who doesn't have focus, or at least who doesn't trust the rest of humanity to be as focused as she is. Yes, I can understand that some people would use teleworking as their chance to goof off and just draw wages - but that's where management have to take responsibility for monitoring and - surprise! - managing their teams. If someone isn't performing, invite them to the (much smaller) office for a face to face if you must, or preferably have a telemeeting with them. If that doesn't get their attention and their work ethic, then dismissal is always an option - after all, there's going to be ONE person in the world who will see that job as interesting and an opportunity...
I've worked from home with people around the globe, on servers at the end of a chain of VLANs and VPNs and office networks, and even for an IT administrator who needs to reboot and restart servers, it was possible to set up mechanisms that meant I rarely had to be onsite. I've done helpdesk and remote admin for people up and down the state - while sitting on the edge of the bed watching TV and chatting with my partner.
Our sales force were more on the road than at the office, and didn't even use a home office, instead using their laptops, mobile phones, and a dash of ingenuity to work remotely long before teleworking became a buzzword. Most of our office staff could easily have worked from home instead; the only thing keeping them at work was a management who were not at all receptive to teleworking, or trusting of their staff to remain work-centric if they weren't under a watchful gaze and itchy whip hand. I've got news for them: 75% of the office staff goofed off for periods of 30 to 120 minutes every day anyway...
Lastly - I think I mentioned in another blog post that smaller enterprises are going to be a bit of a market force to deal with. And it's also going to be the enterprises that work smarter, and can cover more of the globe. You can organise almost anything using the Internet these days, and the 'almost' will be covered before another year or two have gone, mark my words. Instead of having an office full of local people - many of whom will suffer from that "I'm only here for the wages" syndrome - you can pick a smaller, dedicated team from anywhere in the world. And have a presence 24/7, everywhere.
They say that if you find a job you like, you'll never work for a living again. And if you raise your sights when looking for staff to fill positions, you're more likely to find that person that actually likes the job you're offering, and reap the benefits...
Friday, May 22, 2009
Can Haz New New Inventors Plz?
I've just had the most wonderful trip around the garden path, and I haven't even left my chair... As long-time readers know, I have ideas, eminently capitalisable ideas, and for the last five years at least, those ideas have been focused on making a difference to energy use, climate change, and sustainable resource use.
Today I think I experienced the ultimate "innovation" oxymoron. I went to an Australian Government department whose sole function is to nurture innovative ideas, and got shunted around Australia a bit, then finally told that they don't actually deal in innovation, but in fully-formed innovative business ideas. That was the main gist of the conversation I had with the various people on the phone. "We're not actually interested in innovation so much as in the business making money from innovation..."
There are research grants, too, yes, I nearly forgot. But for ideas and concepts that are already under development and which have projected earnings. So if I have an idea for a way to make a motor vehicle more environmentally friendly, I have to somehow fund enough research (on a pension) to prove that I can produce the whole thing (i.e. a prototype) and then have done all the market research (again, on a pension) to prove that people will flock to it and buy it in droves, despite any future further economic downturns or disasters? And I'm supposed to do this so that I can prove to that department that I don't need them? Please to be off-buggering now, government department.
And thanks for all the fish, cos it sure wasn't any help!
Tuesday, May 12, 2009
ORBs #1
Occasional Random Brainpharts (ORBs) seem to be the way to go - so many news articles start me on a train of thought, interrupted, that isn't enough for a decent article but is worthy of mention. So I'll start collecting a few and see how I go.
Random thoughts #1:
Why does Hubble need grapple hooks? Repelling Somalian Space Pirates? Or are we hoping to snag a Vogon construction vessel?
Random thoughts #2:
Maybe the Renegade Scout ship will come from a place named by Venetia Phair.
Random thoughts #3:
Do bras need grapple hooks, or is that just what we males do?
Random thoughts #4:
For heaven's sake, why bother with the bomb body in this case? Why not just drop the capsules directly from a hopper? Save all that extra manufacturing? And, lastly, neatly looping this thread, maybe these capsules could benefit from some form of grapple hook to anchor them to the ground...
Monday, May 4, 2009
Another Day, Another Lying Company
Hmm - I wonder why people have lost faith in just about every business ever... Latest assholes are a VOIP/phone service provider who trumpet "No more line rental fees!" all over their sign-up page. Their plans start from $9.95 a month.
Hang on - a month? So you actually still do charge a monthly fee (that any other telco calls the line rental fee) but without the messy business of actually having to provide, you know, a line... Talk about slimy.
But I'm fair, even when the company appears not to be. I emailed them for some information on the "no line rental fees!" thing. Did this mean that $9.95 was available for call charges?
Their reply: "The monthly plans do not include their fees as call credit. So on the $9.95 plan this does not mean it includes $9.95 of calls each month. You do however get access to lower rates and in some cases free calls. The plans includes a standard local Australian phone number which allows incoming calls from any telephone service. There is no contract so you can change plans or cancel at any time."
Hmmm. Just because you call it something different, VOIPco, doesn't mean it's not the same thing. Please stop lying to me, my IQ is above the "average" of 110. (And isn't that another ad that shits you? The average IQ is, by definition, 100. If you got that, and decided not to go get your IQ tested by such idiots, then you've automatically passed...)
Anyway - I guess those ads are back in force. Stay tuned for more as I think of them...
Monday, April 27, 2009
Aussie Impeller Propels Itself Overseas
I do like this new impeller in this video, it's quite an achievement. Jay Harman developed it, sounds to me like he might be an Aussie. His company PAX Scientific seems to be offshore, but I may be wrong. I wouldn't be surprised though to see PAX overseas, because in Australia we've traditionally discouraged the innovators and forced them to go somewhere else to develop the idea. Here's another piece of info on the PAX Lily Impellers. It also seems to imply that PAX Sci and Jay Harman are not in Oz any more, Toto...
You have to love that we give the rest of the world so many good minds and killer developments...
Wednesday, April 15, 2009
Head In Clouds, Feet On Ground.
Cloud Computing. Who hasn't quailed at hearing that term, over and over, for the last year? It's also not so much the words themselves, as the opinions for and against, all given as Every Absolute Truth You Always Wanted To Know About Cloud Computing But Were Afraid To Ask. And to me the emphasis seems to be on the "Every" rather than the "Absolute." No-one really knows how well it's going to work, who's going to be financing and making the clouds, how well Cloudsourcing Company A's cloud is going to interact with Cloudsourcing Company B's cloud, and whether it will be the answer to your prayers or not.
I'm definitely not an expert on any of that. When I was working I found it was hard enough to keep up with my own server rooms and remote disaster recovery facilities. I did, however, find time to play with virtualisation and cluster computing. And improved a few places I worked at, I might add. I can and will add my voice on the subject of CC.
To begin with, I think CC will benefit small enterprises and home businesses the most. If you have a LAN and a broadband router going at ADSL2+ speeds, then you will gain a cloudy server room out there, without the cost of owning hardware, the ongoing air conditioning and electricity and consumed space costs. They'll probably be the first to entrust data and logins to the cloud, the first to use cloudsourced computing power to get their product out to their public.
Home users too will be able to see some gains. Storage, for a start. How wikkid to stash your files *here* and then go *there* and be able to work on those same files? Yeah, totally. And using a legit copy of Microsoft Office online versus risking being caught with a "liberated" version? Priceless. (Or not... %)
But established businesses with more than a few servers will find that the gains are proportionally smaller for them. You will no doubt prefer to keep local logins, for the time being at least. And the most sensitive files - that's human nature, after all. In almost all cases, I see the IT staff being the ones pushing for this and trying to make a business case for CC. Management just isn't (sorry CEOs and other ULM) very savvy about IT, because if they were, they'd be in IT instead of management. It's like saying that because a person is skilled at using surfboards, they are great at making them. Or vice versa. People have skill sets.
I had to "sneak" virtual computing into the network: find an old clunker server and slip VMware onto it, then install a server virtual machine and use it for a non-core service, then find another non-critical server service and slip that onto the same machine, until I could reliably switch between the mission-purposed machine and the VM without a hassle. Then I switched the mission-purposed machines off and waited for one of the random inspections to reveal that we were using fewer machines and less electricity and still fulfilling all our IT needs...
I'd tried to get official sanction for VM technology, but it kept getting put off and put off. Never outright forbidden mind you, and so I took the initiative. But I figure CC will need to pull much the same stunt.
There's a few other questions that everyone who wants to use CC will need to answer. Who's going to pay for this? Will it cost you a significant outlay and significant ongoing costs, more than half of what you'd expect to pay for keeping these services locally? Because then, I can see that given the other drawbacks of CC, you might be better off keeping those services locally.
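That break-even question can be put into rough numbers. Here's a minimal sketch of the "more than half" rule of thumb above - every figure is a made-up placeholder, not a real quote:

```python
# Rough cloud-vs-local cost comparison. All numbers are hypothetical
# illustrations - plug in your own quotes.

def local_annual_cost(hardware, years_of_life, power_and_cooling, space):
    """Amortised yearly cost of keeping a service in-house."""
    return hardware / years_of_life + power_and_cooling + space

def worth_moving(local_cost, cloud_cost, threshold=0.5):
    """The rule of thumb from the text: if the cloud costs more than
    about half of the local figure, CC's other drawbacks may tip the
    balance back toward keeping the service local."""
    return cloud_cost < threshold * local_cost

local = local_annual_cost(hardware=12000, years_of_life=4,
                          power_and_cooling=2500, space=1500)
print(local)                       # 7000.0 per year
print(worth_moving(local, 3000))   # True - clearly cheaper in the cloud
print(worth_moving(local, 4500))   # False - too close to call
```

The 50% threshold is just the post's own heuristic expressed as a parameter; adjust it to taste.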
Will there be a significant security risk for you? How sensitive is the service and associated data for you? Because, in the case of an attack on a local server cluster after your customer data, for example, your IT department can, as a last resort, physically pull the network cable to that cluster. I "pulled the plug" on our entire internet connection when the company webserver farm was hit by one of the first ever DDoS attacks back in the early days; it saved us a bundle, because in those days bandwidth was expensive.
If someone is cracking into your data in a cloudsourced situation, you don't get the option to shut that service down. Indeed, if a critical set of CC servers goes down, you don't even get the choice to call your IT staff and have them do a callout.
Sorry if this is an overly simplistic view of CC - but I'm often an overly simplistic person when it comes to those things, and it's generally stood me in good stead. I'm often overly conscious of security or reliability implications that my management considered too slim a chance to worry about - and invariably were glad they'd let me plug anyway... And I can see so many points of failure and breach in CC that it makes my eyes water and my palms sweat.
All that said, I do use Google's mail and apps almost exclusively, with Star Office locally, and I use several other software services "out there" to store and synchronise files, bookmarks, and photographs. I use services "out there" to collaborate with other people and run projects. I just make sure I have some form of local backup, is all...
Tuesday, April 14, 2009
Space Tech Comparison
We've come a long way. I mean, just look at this, which you can find among other things here, and which is to be auctioned off next month. I remember as a kid playing with components that large, that clumsy, and that simple.
That's a few transistors in those cylindrical shields/heatsinks, seven of them by the looks of it. A modern CPU chip contains hundreds of millions of transistors on a single sliver of silicon...
And you think to yourself "OMG! We sent people into space. With equipment like THAT..."
Scarily, we now have spacecraft with modern technology and equipment inside, and they don't seem to fly half as well nor fulfill their missions half as often...
Monday, April 6, 2009
Broadband Initiative 2.0
With gardening taking up most of my productive time (and leaving me pretty stuffed afterwards and basically cbf to blog) I've been a bit lax in feeding stories into the blogs. But also in my defense, there hasn't been much that made me want to reach for the keyboard, either....
Remember when I worked out that Telstra's bid for the national broadband initiative showed that they had been fully expecting to make several billions in clear profit from their shabbily prepared tender? Reading this shows me that there's a far deeper problem involved.
It looks like every company involved in the tender thought they were going to clear billions and billions in profit... And what can that be traced back to? Well, I think Telstra's lazy insolent smugness at having been the monopolistic sole telco for Australia for so long, has rubbed off onto every other telco they've been involved in. It's always been easy to stay under Telstra's prices, but even easier to keep prices high and blame Telstra for their monopoly pricing to other telcos.
Sol Trujillo leaving has been one of the best things to happen to Telstra - to date. I won't say it was the best thing ever because now we need to watch what the new incumbent(s) do. Letting other companies lay their own fibers has also been one of the best things to date - and one of the worst.
Because many of those telcos that laid in fiber are overseas companies, and while it seems inevitable that we'll soon have to think in terms of one global economy, that time isn't quite here yet. (It should be, but it's not. Long story...) So for the moment our choice is to bleed our dollars into Telstra's pockets, or into the pockets of an Asian telco cartel, or something. When Telstra was still a government-owned and operated department, we probably didn't mind so much because we were kind of keeping it in the family.
Now, it's unacceptable. It's distasteful, and even worse, it's now put us somewhere in the range of third world countries when it comes to price of digital access. I'd like to bring a quick observation to the table. When I was running an ISP back in the mid 90's, data access was via ISDN lines, and our clients accessed our server via 33K dialup modems. The cost of the ISDN data was actually quite reasonable.
Then Telstra realised in the late 90's that it was sitting on an untapped goldmine, and the price went up from 9c to 19c a meg; only a few years after that, the ISP sold to another ISP. But think about that for a moment - if you have a 5Gb a month cap on your downloads, then at those prices you'd be paying $950 a month for that data. Before allowing for inflation, which would make your download limit worth about $3500 in real terms...
And that 19c/meg figure is still around to this day. Yep, it's just migrated to GPRS data rates. So people with mobile phones with no 3G/HSDPA capability can look forward to big bills month after month, and really, data plans for mobile broadband aren't anywhere near realistic yet either. While landline customers check their price in Gb (and find that they are paying perhaps $8/Gb), mobile broadband users are still eking out their data in Mb's and finding that they are paying 3c/Mb, or about $35/Gb on average - four times as much as the landline customer.
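The arithmetic behind those figures is easy to check. A quick sketch, using the prices quoted above and taking 1 GB = 1024 MB:

```python
# Reproduce the per-GB figures quoted above. 1 GB = 1024 MB here.
MB_PER_GB = 1024

# Late-90s data at 19c per MB, applied to a modern 5 GB monthly cap:
cap_gb = 5
old_rate = 0.19                                   # dollars per MB
print(round(cap_gb * MB_PER_GB * old_rate, 2))    # 972.8 - the "roughly $950 a month"

# Today's comparison: landline broadband vs GPRS-era mobile data.
landline_per_gb = 8.00                            # roughly $8/GB
mobile_per_mb = 0.03                              # 3c/MB
mobile_per_gb = mobile_per_mb * MB_PER_GB
print(round(mobile_per_gb, 2))                    # 30.72 - call it $35/GB with overheads
print(round(mobile_per_gb / landline_per_gb, 2))  # 3.84 - roughly four times the price
```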
And most of that is down to pure price gouging and "socially acceptable" price fixing between the telcos. Because that's the way it's been done "since Telstra days..."
I've noticed something else. In eight years of having broadband, at three different places in a capital city and one in immediately surrounding areas, I've NEVER had acceptable performance from the existing broadband infrastructure. Currently being in the "surrounding area" I have a 512K connection of old style ADSL which has a latency delay of about 5 - 25 seconds on everything. At previous locations I've had ADSL and ADSL2 connections that barely managed 200K, or which disconnected and reconnected every hour due to line noise. You get the idea.
The best performance I EVER got was a one month love affair with mobile broadband, where I was getting 1.5M speeds with almost no latency - but paying $30 a month for 1Gb and then exorbitant prices once I went over that...
So when K-Rudd talks in expansive terms about "national broadband networks" that will solve all our woes, I wish I could be as certain of that myself... See, amid fears that the Internet is at the limit of what can be done with the TCP/IPv4 protocols, and that we're approaching the bandwidth limits of what the network can sustain, I know that we will always find a way to make infrastructure do more with less, and that we will continue to pump infrastructure into the data economy. But will our "national broadband network" be the right technology for that? They are, after all, touting a seven to eight year timespan, and a LOT changes in IT in just 18 months...
So let's hope that the new "conglomerate" doesn't A) decide it's Telstra 2.0 and plans to make huge profits this round of tenders, B) decide it's Telstra 2.0 and can become a new monopoly and C) decide it's Telstra 2.0 and deliver Internet 0.1 standards...
Sunday, March 22, 2009
Soft Robotics - The Right And Wrong
"Soft robotics" being something like this article. Watch the video, then see if you spot three things I spotted right away. I thought of all three in the space of the last 20 seconds of the movie.
Maybe that's because I've already used this method in an experiment I carried out, 25 years ago. Yep, quarter of a century ago...
See, I got hold of an experimenter's kit of that Nitinol memory wire. Nitinol, for anyone who's never heard of it, was the original "memory metal": you heated it, bent it into a shape, then allowed it to cool in that shape. Now you could bend it or stretch it to another shape, and when you heated it again, it resumed the first shape it had "remembered." In the case of the experimenter's kit, the wire just shrank in length when it was heated. Looking at the "tentacle" that they were demonstrating, you can see where this was going, can't you?
First point, and a major point of difference between their clumsy thing and mine - why use four "muscles"? Unless your aim was to exactly mimic an octopus's tentacle, in which case they were still short of the central sheath muscle. My "gripper" used three "muscles" made of Nitinol, which pretty much used up my precious stock of the stuff.
Second point: using "draw wire" technology is not going to be the same as using the stuff I wrote about in the previous article. Those "muscle" fibres can be embedded in the structure of the flexible limb, meaning more precision about where the force is applied.
Last point. They are only worrying about a cable-like muscle in tractive mode. But big advances have also been made in tubular fibres which inflate and shorten through the use of compressed air or liquid. I'm betting it's just a matter of aligning or "weaving" these new carbon fibre muscles and you can mimic the compressive/shortening mode that biological muscles use.
So - if they thought about embedding three bundles of traction-mode fibres with three bundles of compression mode fibres alternating, they could pretty much mimic the octopus' tentacle. If you include with this a central "core muscle" filled with liquid, and tapered the "muscle" ends and the arm body, you would have something that would be considerably better than a tentacle.
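The three-bundle geometry is easy to reason about: space the traction bundles evenly around the arm's axis, and the bend direction is just the vector sum of each bundle's pull. A toy sketch of that idea (the numbers are illustrative, not from the article):

```python
import math

def bend_vector(contractions):
    """Net bend direction for muscle bundles spaced evenly around the
    arm's cross-section. contractions: one 0..1 value per bundle.
    Returns (x, y): the direction and relative strength of the bend."""
    n = len(contractions)
    x = sum(c * math.cos(2 * math.pi * i / n) for i, c in enumerate(contractions))
    y = sum(c * math.sin(2 * math.pi * i / n) for i, c in enumerate(contractions))
    return x, y

# Contract only bundle 0: the arm bends straight toward that bundle.
print(bend_vector([1.0, 0.0, 0.0]))      # (1.0, 0.0)

# Contract all three equally: the sideways pulls cancel, and the arm
# shortens without bending - the compressive mode described above.
x, y = bend_vector([0.5, 0.5, 0.5])
print(abs(x) < 1e-9 and abs(y) < 1e-9)   # True
```

Three bundles at 120° is the minimum that can point the bend anywhere around the full circle, which is why the four-bundle design looked wasteful to me.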
Oh - and the reason I didn't try doing more with my Nitinol "gripper"? Because the wire got hot, it would tend to cut into the plastics I had access to after a while. Silicone rubber tubing was too expensive for me to use, and so soft that the wire would have cut into it too, anyway. I gave it away as an idea too far ahead of the technology at the time. Now I wish I had access to some of the new materials that these guys have...
Artificial Muscle Molto Strong, Tough
Cyberdyne muscles (http://i.gizmodo.com/5176213/these-carbon-nanotube-muscles-are-30-times-stronger-than-human-muscles) are getting closer...
This is one of those breakthroughs that will require more breakthroughs before we can use it to full effect in a human being - imagine a muscle ten times stronger and a thousand times faster responding, attached to your relatively brittle skeleton... So certain amounts of skeleton modification will also be needed. I think I mentioned this problem way back in the archives section of the blog, how replacing muscles would snap skeleton bones.
The less obvious problem will be in the relearning process - imagine a muscle that *snaps* to the new position a thousand times faster than your original muscle did - your nervous system wouldn't be able to cope, and before you'd had a chance to realise that the muscle had started moving, it would have arrived, overshot the mark, and kept going to the point of twisting a limb right off. Or smacking yourself in the head. So nerves (which conduct at relatively low speeds, which is why reflexes are such slow things) would need to be replaced.
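The scale of that mismatch is easy to illustrate with rough numbers. (The conduction speed and distance below are ballpark textbook figures; the "1000 times faster" is the premise above.)

```python
# Back-of-envelope numbers for the nerve-vs-muscle mismatch.
# All figures are rough illustrations, not measurements.

nerve_speed = 60.0     # m/s - ballpark for fast motor nerve fibres
arm_length = 1.0       # metres, roughly shoulder to fingertip

# Time for a "stop!" signal just to travel the length of the arm,
# before any processing in the brain or spinal cord:
nerve_delay_ms = arm_length / nerve_speed * 1000
print(round(nerve_delay_ms, 1))    # 16.7 ms

# If a natural muscle takes ~100 ms to complete a movement, a
# 1000x-faster artificial muscle finishes in ~0.1 ms - the limb
# has arrived and overshot long before that signal gets through.
natural_ms = 100.0
artificial_ms = natural_ms / 1000.0
print(artificial_ms)               # 0.1
```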
A robot powered by this, on the other hand, would be able to move with lightning speed, precision, and reflexes... Making this best suited to robotics for the time being. But bear in mind that I've also predicted that it won't be long now before we can transfer an entire human "mind" like software, so the robot may not be as dumb as planned...
This is, then, a really great advance, providing us with a new technology for medicine and robotics, but it will need a lot of supporting technology to make it applicable to medicine.
Saturday, March 14, 2009
Find Your Kindle 2 PID, Use It As You See Fit!
Stuff Amazon.com! They can't stop you doing as you wish with hardware you've bought; insisting you only load it with amazon.com content is like Toyota making a car that only takes Toyota-brand petrol.
Mind you, no-one would be stupid enough to buy a car that only ran on one specific brand of petrol, so what does that say about you, Kindle2-buying elitist tosser?
Never mind - find yourself a link to the kindlepid.py script here, let Amazon try and issue me a fucking takedown notice!
Wednesday, March 11, 2009
C-Zen Fun
I like this EV much more betterer... As I often say, current EVs all look like they derive from the same perverse desire to make the driver look like a pretentious, tasteless prat; at least this buggy has a touch of style about it. Trust the French.
Aside from one thing, I like it. Reliance on Li-ion technology makes this a less eco-friendly idea. Time to look at other ways to store and use that energy.
Monday, March 2, 2009
Secure Access With Devices
Skribe picked up this nice piece of footage that shows almost precisely what I find wrong with digital technology - it's stuck in the box. The video is nice, if somewhat MS-centric.
But here's the problem: I have three computers at home, and one of them is generally on anytime I am, often two, or sometimes even all three. I have basic remote control between the machines via VNC, LogMeIn, and Synergy. This is all 3rd-party software (and in LMI's case, a 3rd-party server) that I need to run just to share myself among the computers. It's clumsy. I have to log into each of the machines, run the relevant software, then go to the machine I want to control the others from, run the master software of whichever instance I'm using, and then find the controlled machine and log in again.
Each of these programs does something different. VNC works within my network; LogMeIn is better for finding machines that are farther afield on the network, or when I'm farther afield and need to get back to one of the machines at home. These two bring the remote machine's screen to my screen so I can treat the remote computer like an application. (But with provisos - read on.) Synergy is different in that it transfers my mouse movements and keystrokes to the controlled machine, but uses the screen on that machine as the display. This makes it suitable for times when I have my laptop and PC side by side and only have room for one keyboard and mouse, and don't want to use a manual mechanical KVM (Keyboard Video Mouse) switch.
I've become quite adept at picking and flicking between these remote control apps depending on the situation, and to some degree they offer functionality I would hate to live without. I can be sitting here in my kitchen using the laptop, pop open a VNC terminal to the PC which acts as a simple server and media PC, and change the volume of the music it's playing through my lounge room speakers. I can close the streaming audio and instead open up a video which is relevant to what I'm working on. (I could do much the same using Synergy, if my eyesight were good enough to read the other screen at 8 metres' distance; VNC just brings things up close in that aforementioned local application window.)
But now I want to drag a video I'm watching here on my laptop to the PC, and the troubles start... If I drag the video and drop it onto the window which holds the remote session, everything stops. If I want to drag a tab from my Chrome browser on the laptop to the Chrome browser (or any other browser actually) on the PC, things come to a halt. And let's not even talk about what happens when I want to edit a document that resides on the PC using the Open Office Writer here on the laptop...
One reason for this supremely useless behaviour is that we don't have any means of authenticating the user. We're distrustful of who might be accessing our data via remote control, so it's all walled in pretty comprehensively. "No, you may NOT have this file, I don't know who you are!" Even though I might be logged in at the laptop 8 metres away, and the laptop is happy with my credentials, that doesn't mean a thing to the PC. Yet if there were a shred of intelligence built into the OSes, they would accept each other's validation (maybe after ONE manually guided connection between my login creds on the laptop and the login creds on the PC) and realise that the laptop is portable and may be found accessing the network from different physical locations, so some additional validation will need to be done.
The other is that there's no secure way to network the two machines - I'm effectively on a public medium. So I need to establish a VPN tunnel using another third party software such as Hamachi. The list of things that my operating systems don't do is phenomenal. Do I actually care if a window closes or animates down to a pinpoint and winks out of existence? Not a helluva lot. But ask me about portable login and applications and I'd be all ears and a deeper wallet...
Anyhow - back to the validation process. It then becomes a matter of working out which machines a person uses, establishing some kind of pattern to their use of those machines, and their login and validation credentials on each machine. IPv6 and other network identification and location schemes will make it easier to establish where the user is, and that will give clues. It should be possible to establish a few things that I can't do, such as fly at lightspeed, use a machine that's known to be secured to someone else's use only, or use a machine registered to a corporation or network that I'm not a member of.
If I've logged in from somewhere in Mandurah at 0800hrs, there is virtually nil chance that I will be logging in from Hillarys Boat Harbour at 0815hrs. Just to make sure there isn't some super-high-speed transit between the two points and it could be me after all, a second layer of security can be activated. "Please provide a biometric, the password to your diary software, and tell me what you removed from the refrigerator this morning." The latter works if you have a fridge that scans barcodes on the way in and out, but it's just one example. The security system could as easily ask you which online news site you visited, or what was the last email you remember reading. (Don't forget that your "network" of machines can communicate with each other and ask one another questions like this - so the PC you sit at in the library can, once you've established identity, query your server at home, or a machine you used recently at your office, for some validation detail.)
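That "impossible travel" check is easy to sketch. Here's a minimal Python version, assuming rough coordinates for Mandurah and Hillarys (they're roughly 80 km apart) and an assumed ceiling of 120 km/h for plausible ordinary travel - all illustrative figures, not a real policy:

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

MAX_PLAUSIBLE_KMH = 120.0  # assumed top speed of ordinary travel

def needs_second_factor(prev, curr):
    """prev/curr: (datetime, lat, lon). True if the implied travel
    speed is implausible, so a second-layer challenge should fire."""
    t1, lat1, lon1 = prev
    t2, lat2, lon2 = curr
    hours = abs((t2 - t1).total_seconds()) / 3600.0
    dist = haversine_km(lat1, lon1, lat2, lon2)
    if hours == 0:
        return dist > 0
    return dist / hours > MAX_PLAUSIBLE_KMH

# Approximate coordinates, 15 minutes apart: implausible, challenge.
mandurah = (datetime(2009, 3, 2, 8, 0), -32.53, 115.72)
hillarys = (datetime(2009, 3, 2, 8, 15), -31.82, 115.74)
print(needs_second_factor(mandurah, hillarys))  # True
```

A real system would tune the threshold and fall back to the second-factor challenge ("what did you take from the fridge?") rather than refusing outright.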
In this way you have opened a "security tunnel" between the two machines. (Or three machines, or four, or a whole server room full of them, if that's how you roll.) You can also use this to establish and encrypt the VPN tunnel I mentioned earlier was needed, so that the VPN becomes keyed to your current identification details. If you move to another machine and validate properly, then the VPN key fits and you have a tunnel to/from that machine also. As long as the new machine is within a logically attainable distance of the last machine you logged in from, of course. And not located in some Internet cafe or public access terminal or other place that's denied to your login.
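One way to make "the VPN key fits" concrete is to derive the tunnel key from the validated identity plus the pair of machines involved, so only a machine you've actually validated on can compute the right key. This is a hypothetical sketch - the secret, machine names, and scheme are invented for illustration, not a real protocol:

```python
import hashlib
import hmac
import os

def derive_tunnel_key(identity_secret: bytes, machine_a: str,
                      machine_b: str, session_salt: bytes) -> bytes:
    """Derive a per-session tunnel key bound to an identity and a
    specific machine pair (illustrative HMAC-based construction)."""
    context = (machine_a + "|" + machine_b).encode()
    return hmac.new(identity_secret, session_salt + context,
                    hashlib.sha256).digest()

# Established once, during the manually guided first pairing.
secret = b"user-master-secret-established-at-first-pairing"
salt = os.urandom(16)  # fresh per session

k_laptop = derive_tunnel_key(secret, "laptop", "home-pc", salt)
k_pc = derive_tunnel_key(secret, "laptop", "home-pc", salt)
k_other = derive_tunnel_key(secret, "library-pc", "home-pc", salt)

print(k_laptop == k_pc)     # both validated ends derive the same key
print(k_laptop == k_other)  # a different machine pair gets a different key
```

The design point: moving to a new machine and re-validating re-runs the derivation with that machine's identity, which is what makes the key "follow" you rather than the hardware.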
And now that you have the VPN tunnel, there should be absolutely no reason whatsoever that I can't drag a file from the server to the word processor on the remote machine, edit it, and save it back to the server. Or drag a URL from the browser here to a favourites folder over there. Yes, I appreciate that the word processor on the remote machine may be compromised, and could wreak havoc.
Yes, the browser I'm using could attach 140 extra characters to the URL in order to lead my server to a zombiemaster website. But those risks are inherent everywhere and can happen anytime to any machine anyplace. Security needs to be a bit more flexible. Once I have that tunnel established, I can send software to intercept every application my authenticated user opens and scan it on the fly - do I trust it? - and make the call then as to whether to accept the file or reject it and suggest sending it in a more basic format.
If necessary I could even run my own word processor from my secure home server on this terminal I'm using via the VPN - it's not just data that should be possible to flow from one machine to the next - and that way I'm never using a compromised application.
The video makes it clear that Microsoft at least thinks they have some of those problems sorted out, and shows windows flying from one device to the next without frontiers or borders. I would have been happy to see one "Access Denied - this is a public resource" warning or something, but I'll presume they did think of these things.
And I'm standing by my statement that eventually the difference between a computer and anything else (such as a key tag or a door) will blur and become kind of indistinct. And when that does happen, and you want to store your data and distribute it out to your workplace, your coffee shop, your vehicle, your prospective customer's place, your entertainment wall in your home - it will always be riding shotgun on who you are, where you are, and if this is what you decided earlier is appropriate use of the data.
Once this happens you can get to the happy situation where you carry a data card with you and it unlocks relevant content on any public machines you are walking by, through RFID or cellphone technology, and if your wallet is stolen, you can establish new credentials through the security mechanism, and the instant that you do, your stolen ID's (which your security software knows were all in the same wallet that just got stolen) all become inactive at once.
In fact, if the thief tries to use them even once in a manner that raises suspicion in the security system and is unable to validate using one of the other mechanisms such as biometric or knowing your last email, then that entire cluster of access devices is flagged as compromised until you validate them again.
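The wallet-as-cluster idea in the last two paragraphs can be sketched as a tiny data model. The class and method names here are invented for illustration:

```python
# Credentials are grouped into clusters (e.g. everything carried in
# one wallet). One suspicious use without a valid second factor
# taints the whole cluster until the owner re-validates.

class CredentialCluster:
    def __init__(self, name, credential_ids):
        self.name = name
        self.members = set(credential_ids)
        self.compromised = False

    def report_suspicious_use(self, cred_id, second_factor_ok):
        # A flagged use that fails the second factor freezes the lot.
        if cred_id in self.members and not second_factor_ok:
            self.compromised = True

    def is_usable(self, cred_id):
        return cred_id in self.members and not self.compromised

    def revalidate(self):
        # Owner proves identity through another channel; thaw the cluster.
        self.compromised = False

wallet = CredentialCluster("wallet", ["bank-card", "transit-card", "work-rfid"])
wallet.report_suspicious_use("bank-card", second_factor_ok=False)
print(wallet.is_usable("transit-card"))  # False: whole wallet frozen
wallet.revalidate()
print(wallet.is_usable("transit-card"))  # True again after re-validation
```

The useful property is exactly the one described above: the thief's single bad attempt with one card deactivates every credential that was in the same wallet, at once.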
And of course, a whole class of currently annoying and bothersome exploits such as botnet herding become several orders of magnitude more difficult to do, because no-one can be in two places at once, and botnets are by definition geographically disparate. So things like spam also become difficult, because each one is now identified by your particular validated ID.
You can't be sending an email from several hundred machines at once, nor sending to millions of "friends" at once. Ditto for spammed Twitter messages, or IM's or even SMS spam - if it doesn't make sense, and if you did subvert a machine locally to do it, the other machines along the chain will realise that you can't do this and silently stem it.
I wonder what else is brewing in the very near future?
Saturday, February 28, 2009
Sometimes An Article Is Just Sh*t N Filler
Sometimes I wish people would cover a subject properly instead of just waffling. Reading this article has left me with a lasting and indelible impression that DC power transmission is just a load of shit. Comic book shit, but shit all the same. Aside from the title of the article being a 100% lie (the article explains nothing about HVDCT) it's actually counterproductive and creates the impression that treehuggers are all clueless hippie airheads - not a good image for eco aware people...
It's especially sad when there are these other issues to report on, new engine technology, and new (though still derivative-looking) electric cars. Oh and while I'm getting pissed off at Gizmodo-related sites - AAAARRRRGGGGHHHHH! to the annoying, stupid totally crass term "leccy tech" for EVs. Stupid asswipes that are using this echolalic term are doing more to damage the credibility of EVs than Ford and GM and Shell and BP combined, ffs.
Thursday, February 26, 2009
Ion Drive made from a Coke Can
http://www.youtube.com/watch?v=vpEnnUHo9yg Unless it's total bullshit, this would appear to be a very cheaply constructed ion drive.
Paint It Bend It Spray It On - The Electronics Revolution
A few years ago I remember reading about paintable, bendable electronics online. The article had a wistful, "wouldn't it be great if . . . ?" tone, and mentioned some "promising research" being done. Then a year ago flat flexible OLED displays were "imminent." In all these cases, the timeline was a few years into our future. And yet, here's the payoff:
It will be possible to pretty much put it anywhere. That includes your cutlery and crockery, a flat slab of styrofoam painted to look like a book, shopping bags, windows, the roof of your electric vehicle...
It'll very rapidly become cheap. That means you will find it on your shopping bags, your cutlery and crockery, and made into a lightweight e-book reader, as well as allowing your EV to "graze" on sunlight wherever you park it.
Once people see it in use, they will think of more uses for it. I'm pretty pedestrian in my predictions, I can see a time when it will even be on pills and medications you take, to target the drugs to the right places, and measure the effect the drugs are having and adjust the dosage on the fly. It will probably also be on your money (credit, debit, whatever) cards, furniture, and walls.
When electronics becomes this common and this invisible, and connectivity and intelligence can be built in, you're onto something that will change the world more than the Internet has. Combine this with something like the Phantom OS currently developed in Russia (and the resultant programming upheaval it will create) and you have a seriously BIG change in technology and how we and it interact.
On the subject of Phantom - (here is what seems to be their homepage, by the way) there are some programmers who will know this kind of "run in place" style of operating system like the backs of their hands: all the programmers who wrote software for the Tandy M100 and Tandy M200 "laptop" computers of the '80s. Those machines used a very similar way of operating - files and programs were written to nonvolatile memory, and switching off just meant stopping mid-task, then restarting at that same point as though nothing had happened.
So devices using Phantom technology will start up instantly and carry on doing whatever they were doing when the power went off. Seems to be a boon for solar controllers and low-power monitoring and remote control devices to me. And of course for advertising signs that could steal the power of your mobile phone transmission to light themselves up only when there was someone with a cellphone nearby, or - hell, I'm sure you can see millions of uses for this technology...
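The "carry on where you left off" principle can be mimicked in ordinary code by checkpointing state to persistent storage after every step. Phantom does this transparently at the OS/VM level; the sketch below just makes the same idea explicit (the counter task and filename are invented for illustration):

```python
import json
import os

# Program state lives in persistent storage, so a restart resumes
# mid-task instead of starting from scratch - a toy imitation of
# the "run in place" style, with the checkpoint made explicit.

STATE_FILE = "counter_state.json"

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"next": 0}  # first-ever run

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

state = load_state()
for _ in range(3):        # do a little work...
    state["next"] += 1
    save_state(state)     # ...checkpointing after every step

# Kill the process anywhere in that loop and the next run picks up
# from the last checkpoint, as though nothing had happened.
print("resumed at", state["next"])
```

For a solar controller or remote monitor this means a power cut costs you at most one step of work, which is why the pattern suits exactly the low-power devices mentioned above.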
As I used to say for my BBS, TEdLIVISION: "-- don't touch that dial!!!" - there are a whole load of new and interesting applications of this technology coming to a common item near you, and probably sooner than you think!
- http://www.treehugger.com/files/2009/02/plextronics-launches-d-line-shows-off-potential-of-solar-powered-lighting-technology.php Paint-on technology. Flexible.
- http://www.treehugger.com/files/2009/02/spray-on-solar-panels-could-be-sold-2011.php ...and spray-on solar panels. Flexible.
Sixth Sense - One Day It Will Be Internal.
PLEASE NOTE: I've been evangelising the coming of this technology for several years now. And here (http://tech.yahoo.com/news/afp/20090205/tc_afp/usitinternetresearchtedmit) it is.
Hang on, you're saying, these guys have produced a projector and smart stuff to overlay images on your surroundings. That's not the same as your NanoNeuroNet[(c)2000-2020 teddlesruss] technology.
I direct your attention to the closing sentence. I'm not the only one thinking implantable reality-augmentation.
The only difference is that they are still thinking "fiche'n'chips": hardware manufactured and then surgically implanted - a technique that will not work unless you trust a robot, with its ability to mechanically make hundreds of thousands of connections, to go poking around in your brain and "plumbing in" the hardware. Also, you'd be carrying discrete lumps of hardware inside your head, some of which may weigh a gramme or more. That's the kind of thing that would turn into a lethal, brain-jellifying missile in a high-speed accident or, in the worst scenario, if you walked head-on into a glass door. Do you really want a few dozen quite massive bullets inside your body?
On the other hand, I'm thinking nanotechnology, a whole series of self-organising solutions that you inject and which arrange themselves along neurons and synapses, forming in effect a spidery scaffold that is distributed all through your body and weighs almost nothing compared to the existing nerve cells.
This technology would be less invasive, and would have a few curious side effects. In effect, your nervous system and brain become very hardy: most "natural" deaths involve neurological failure - nerves stop conducting, synapses misfire more and more. With this scaffold in place, neurological failure becomes much less of a problem. You get to live longer.
The second side effect is something that's not immediately obvious. But the speed of conduction is several orders of magnitude different between nerve cells (very slow) and metal or some metallic nanoparticle. In effect, your reflexes and motor skills will improve dramatically. The only problem would be a feeling of disorientation until your brain learned to ignore the neural signals that would arrive a full couple of milliseconds AFTER the new NNN's signal had already arrived and been acted on.
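The speed gap is worth putting numbers on. These are illustrative assumptions: nerve conduction of around 100 m/s at best, and a signal in a metallic conductor travelling at some large fraction of light speed (0.5c assumed here):

```python
# Rough comparison of signal transit times over the same path,
# using illustrative figures rather than measurements.

NERVE_M_S = 100.0          # fast nerve conduction, approx. upper bound
C = 3.0e8                  # speed of light, m/s
WIRE_FRACTION_OF_C = 0.5   # assumed propagation factor in a conductor

PATH_M = 0.3               # an arm's-length stretch of nerve, approx.

nerve_ms = PATH_M / NERVE_M_S * 1000
wire_ms = PATH_M / (C * WIRE_FRACTION_OF_C) * 1000

print(f"nerve: {nerve_ms:.3f} ms, wire: {wire_ms:.9f} ms")
# Several orders of magnitude apart - the scaffold's signal arrives
# while the biological copy of the same signal is still in transit.
print(f"speedup: ~{nerve_ms / wire_ms:.0f}x")
```

On these assumptions the nanoparticle path delivers the signal about a million times sooner, which is the lag the brain would have to learn to ignore.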
So here's a technology I've envisioned for almost a decade now, one that becomes more and more possible with each day's research being done around the world. I won't even hesitate to say that several military projects are already at the "injecting NNN goo" stage, and I would not be surprised to hear that at least some of the tests had been successful.
Now I no longer wonder how long before they make an NNN, I wonder how long before someone figures out how to download one's NNN to a backup device...