Vanilla 1.1.9 is a product of Lussumo. More Information: Documentation, Community Support.

    • CommentAuthorbc
    • CommentTimeMar 12th 2010 edited
     
    I watched the video and my gut tells me this is a genuine outfit. The technology looks genuine too, but details are pretty scant. I guess the stuff about "unlimited" is hype. Is the video rendered in real time? They don't mention that.

    However, whether it will be a commercial success is another question: they may have a clever algorithm which does most of what current techniques do, but what about the rest of the "cool" features that are expected nowadays but they don't support? There is that old 80/20 rule. Are storage requirements an Achilles heel, for example? I wonder how they will protect their IP as well.

    They use Google as an example, but Google have masses of computer power, which they are claiming they don't need... There is also the problem of breaking into a market that is heavily invested in polygon tech. Games publishers are often closely tied to the graphics card guys.

    ETA: Maybe these guys will be more like the "DNF" development than Steorn.
    • CommentAuthorUtD_Grant
    • CommentTimeMar 12th 2010
     
    Posted By: bc
    They use Google as an example, but Google have masses of computer power, which they are claiming they don't need...


    Like a D-cell?
    • CommentAuthorunderunity
    • CommentTimeMar 12th 2010
     
    Google computers are likely optimized to shuttle gigabytes of data around on a whim.

    A home computer, on the other hand, is not. Computing power is loads higher than bandwidth, and engineers have gone through an obscene amount of hardship to work around RAM/HD seek times.

    This "technology" is evolution in the WRONG direction. Processing shit isn't our problem, storing and transferring it is. I can tell this guy is talking rubbish because he mentions how "doing math" and "calculations" slows down graphics rendering. It really doesnt (relatively). Fetching data does. AMD suggests 6:1 ratio between arithmetic/fetch instructions. Thats why GPU tessellation is the right direction, and trying to sort through a kajillion points is, ironically, pointless.
  1.  
    I finally got a look at the demo. I thought it was a pretty good looking result, although I was dying to see a few straight edges and corridors with crates in them brought into the mix by the end.

    @uu

    I remember the thrill of a few Mb of texture memory on a 3Dfx card and the satisfying "clunk" the monitor made as you realised your 3D 1st Person shooter was actually going to use it properly.

    I thought most of the issues transferring vertex and texture data had been worked through? Wasn't that one of Carmack's big deals a few years ago, a transparent caching scheme for these things with load on demand?
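    The load-on-demand idea can be sketched as a tiny LRU tile cache. This is a toy model, not the actual scheme from id Software or anyone else; TileCache and load_tile are made-up names. Tiles are only fetched from slow storage when first sampled, and the least-recently-used one is evicted when the budget is hit.

    ```python
    from collections import OrderedDict

    # Toy load-on-demand texture-tile cache (hypothetical, for illustration).
    class TileCache:
        def __init__(self, capacity, load_tile):
            self.capacity = capacity      # max resident tiles
            self.load_tile = load_tile    # slow fetch, e.g. a disk read
            self.tiles = OrderedDict()    # tile_id -> data, in LRU order
            self.misses = 0

        def sample(self, tile_id):
            if tile_id in self.tiles:
                self.tiles.move_to_end(tile_id)      # mark recently used
            else:
                self.misses += 1
                if len(self.tiles) >= self.capacity:
                    self.tiles.popitem(last=False)   # evict the LRU tile
                self.tiles[tile_id] = self.load_tile(tile_id)
            return self.tiles[tile_id]

    cache = TileCache(capacity=2, load_tile=lambda t: f"tile-{t}")
    for t in [0, 1, 0, 2, 0]:   # tile 1 is evicted when tile 2 loads
        cache.sample(t)
    ```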

    It seems just as premature to write off the approach as it does to say it'll replace polygons. I'd certainly like to see more.
    • CommentAuthorbc
    • CommentTimeMar 12th 2010
     
    Posted By: underunity
    Google computers are likely optimized to shuttle gigabytes of data around on a whim.
    Actually Google use standard hardware, although they put them in custom racks, and maybe have 450,000 of them. Google have managed the trick of using cheap hardware in massively parallel operation.

    I worked on a custom SMP system once, and it was not very powerful and certainly not cheap! It was codenamed "Goldrush" which perhaps indicates the level of marketing input that went into it. I wonder if they ever sold any.
  2.  
    "God willing" Ha ha ha!....
    Nearly killed him to say it! (must be the accent..??)
    "But you have to say it! It's a proven target audience drop-hook!"
    Heh..