
  1.  
So THAT's how they work...
    https://www.youtube.com/watch?v=7KCWcx-YIRI

    (Ging gang gooli, gooli, gooli, gooli, watcha, Ging gang goo, Ging gang goo)
  2.  
    Ganguli's references to the edge of chaos for weight initialisations remind me of the first radios I built, which were super-regenerative and could be made to howl when feedback was excessive. But on the edge the gain was stellar. I had Alpha Centauri pop coming in five by nine.
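    For anyone who wants to see the same knife-edge in a network instead of a radio: here is a minimal sketch (my own toy, assuming the random tanh network setting of the mean-field papers Ganguli builds on; the widths, depths and sigma values are illustrative only). Two nearby inputs are pushed through a deep random net: below the critical weight scale they merge, above it they decorrelate into "howl", and at the edge their separation survives for many layers.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    width, depth = 1000, 50

    def final_distance(sigma_w):
        # Two inputs that start very close together.
        x1 = rng.standard_normal(width)
        x2 = x1 + 1e-3 * rng.standard_normal(width)
        for _ in range(depth):
            # Fresh random tanh layer, weight variance sigma_w^2 / fan-in
            # (zero bias, so the critical point sits at sigma_w = 1).
            W = sigma_w * rng.standard_normal((width, width)) / np.sqrt(width)
            x1, x2 = np.tanh(W @ x1), np.tanh(W @ x2)
        return np.linalg.norm(x1 - x2)

    for sigma_w in (0.5, 1.0, 2.0):  # ordered / edge of chaos / chaotic
        print(f"sigma_w={sigma_w}: separation after {depth} layers = "
              f"{final_distance(sigma_w):.2e}")
    ```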
  3.  
    Ganguli is def a rising star. For example, Hinton goes for dropout, and Ganguli trashes it
    https://arxiv.org/abs/1611.01232
    Similarly, Hinton goes for ReLU and Ganguli trashes that too :)
    https://arxiv.org/abs/1711.04735

    It's all fun and games with deep learning these days.
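    The gist of the second paper is easy to demonstrate. With orthogonal weight matrices the input-output Jacobian of a deep net keeps all its singular values at 1 ("dynamical isometry"), whereas a Gaussian initialisation at the same critical scale lets them spread over many orders of magnitude; ReLU nets, the paper argues, cannot achieve isometry at all. A linear toy sketch of the contrast (my own illustration, not the paper's mean-field calculation):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    width, depth = 200, 30

    def jacobian_singular_values(init):
        # Product Jacobian of a deep *linear* net: just the matrix product.
        J = np.eye(width)
        for _ in range(depth):
            if init == "orthogonal":
                # QR of a Gaussian matrix yields a random orthogonal matrix.
                W, _ = np.linalg.qr(rng.standard_normal((width, width)))
            else:
                # Critically scaled Gaussian (variance 1 / fan-in).
                W = rng.standard_normal((width, width)) / np.sqrt(width)
            J = W @ J
        return np.linalg.svd(J, compute_uv=False)

    for init in ("orthogonal", "gaussian"):
        s = jacobian_singular_values(init)
        print(f"{init:>10}: max sv = {s.max():.2f}, min sv = {s.min():.2e}")
    ```

    A product of orthogonal matrices is itself orthogonal, so the first line reports singular values of exactly 1; the Gaussian product's smallest singular values collapse towards zero, which is precisely the gradient-propagation problem these initialisation papers attack.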
    • Angus · Nov 30th 2017
    The importance of the edge of chaos in these kinds of networks was described in Stuart Kauffman's (1994) book I mentioned over in the reading thread.
  4.  
    http://www.telegraph.co.uk/science/2017/12/06/entire-human-chess-knowledge-learned-surpassed-deepminds-alphazero/
    https://arxiv.org/abs/1712.01815

    “It will no doubt revolutionise the game, but think about how this could be applied outside chess. This algorithm could run cities, continents, universes.”
    • Angus · 4 days ago
    You are telling me that AIs can conquer the world using the knowledge they acquire playing with themselves? Wankers!!
  5.  
    I wonder what the AI-written version of MCO (Modern Chess Openings) would look like. Are there new lines that human players haven't found, or do AI chess programs still use MCO for the first 10 moves or so?
  6.  
    " All the brilliant stratagems and refinements that human programmers used to build chess engines have been outdone, and like Go players we can only marvel at a wholly new approach to the game."

    https://chess24.com/en/read/news/deepmind-s-alphazero-crushes-chess

    Your question is mine also. Looking into it...
  7.  
    Table 2 et seq from the paper
  8.  
    http://videolectures.net/deeplearning2016_ganguli_theoretical_neuroscience/

    Of course I like that Ganguli is intelligent, but what I like more is that he is able to express himself very clearly.
    Quite often we find that the one does not accompany the other.
  9.  
    Those interested may have noticed that this is basically the same Ganguli lecture I originally posted. However, not only is it neatly partitioned visually into lecturer and slides, but more detail is also explicated for each subtopic. The number and richness of the subtopics is what makes it a seminal lecture for me. I've been reading neural network papers since the eighties (most of which seem to have flown in one ear and out the other) and I can only think of one other that made such an impression (the PDP books, which explained, inter alia, backpropagation).

    It seems as if it took the capture of a competent physicist by neuroscience to make these advances. This shows up in his approach to research, and in how it differs from the far more intuitive approach of Hinton, which in comparison paints Hinton as something of an artist. Or a "Bastler" (German for tinkerer), even!

    All this gets me that much closer to answering the age-old question I've had about neural networks. Once trained, the entire network can be described by an input-output correlation matrix (or equivalently its eigenvectors and eigenvalues), and this relates, for unsupervised learning, to the input autocorrelation and the clustering statistics (say K-means). Why then, when learning dynamics is fully understood, should one spend time training anything? Just instantiate the eigensolutions in a few bytes of memory! One can see how a lump of cellular matter called a brain might need to go the long way round, but we have access to models and mathematics!
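    To make the shortcut concrete: in the deep *linear* setting Saxe, McClelland and Ganguli analyse, gradient descent on squared error converges to a map determined entirely by the input autocorrelation and the input-output correlation, so one really can write the "eigensolution" down and factor it into layers without a single training step. A minimal sketch with synthetic data (all names and sizes mine, purely illustrative):

    ```python
    import numpy as np

    # Toy data: inputs X (d x n), targets Y (k x n).
    rng = np.random.default_rng(0)
    n, d, k = 500, 8, 3
    X = rng.standard_normal((d, n))
    Y = rng.standard_normal((k, d)) @ X + 0.1 * rng.standard_normal((k, n))

    # The two correlation matrices that fully determine the solution.
    Sigma_xx = X @ X.T / n          # input autocorrelation
    Sigma_yx = Y @ X.T / n          # input-output correlation

    # The asymptote of gradient descent for any linear net: least squares.
    W_star = Sigma_yx @ np.linalg.pinv(Sigma_xx)

    # "Instantiate the eigensolution": factor W_star by SVD into a balanced
    # two-layer net y = W2 @ W1 @ x with the same input-output map.
    U, S, Vt = np.linalg.svd(W_star, full_matrices=False)
    W1 = np.diag(np.sqrt(S)) @ Vt   # first layer
    W2 = U @ np.diag(np.sqrt(S))    # second layer

    print(np.allclose(W2 @ W1, W_star))  # True, and no training was done
    ```

    The catch, of course, is that this closed form exists only for linear networks with known statistics; add a nonlinearity and the eigensolution is no longer available in a few bytes, which may be part of why both training and brains go the long way round.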
    • Angus · 4 days ago
    Seemed that way to me too. But perhaps that is what makes a learning matrix different from a brain, or anyway a conscious brain. It also seems to me counterintuitive that you could just print into a brain all the experiences of a life, in one parallel operation. And not just because it would be vastly complex.

    Crypto-essentialism, I suppose.
  10.  
    Certainly something is imprinted into the gubbins comprising the embryo. For example, the tweets of newly-hatched chicks with opened beaks soliciting food.
    • pcstru · 4 days ago
    Posted By: Andrew Palfreyman: Certainly something is imprinted into the gubbins comprising the embryo. For example, the tweets of newly-hatched chicks with opened beaks soliciting food.

    Imprinted perhaps; but the trial and error training is effectively being done by evolution.
    • Angus · 4 days ago
    But that's exactly the process AP was mentioning. The "weights" are stamped in all at once - not accumulated as the matrix gains experience.
  11.  
    There's a sliding scale here. A chick is also born with a beating heart and breathes air as soon as it leaves the egg. These are "autonomous functions", but functions nonetheless, albeit at a low level on a scale which goes up to the sophistication of opening the beak for food and tweeting. This all is completely hardwired. So at what level of sophistication does hardwiring cease? How about the skills of "muscle memory"?

    These things are imprintable, so it does not seem much of a stretch to have "daily survival skills" imprintable. In insects, perhaps they all are.
    • pcstru · 4 days ago
    Posted By: Angus: But that's exactly the process AP was mentioning. The "weights" are stamped in all at once - not accumulated as the matrix gains experience.

    I'm confused. Evolution does not happen all at once.
    • Angus · 4 days ago (edited)
    Evolution is not the neural network that comes out ready-programmed every time an egg hatches. Evolution provides the values used to programme it.
    • pcstru · 4 days ago
    Posted By: Angus: Evolution is not the neural network that comes out ready-programmed every time an egg hatches. Evolution provides the values used to programme it.

    Evolution is effectively training the network. Unsuccessful programs do not survive.
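    That claim is literal enough to put in code. A toy sketch (task, population size and mutation rate all invented for illustration) in which a tiny network learns XOR with no gradients at all: the "training signal" is just that unsuccessful weight vectors do not survive selection.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    def loss(w):
        # Unpack a flat 13-value genome into a 2-3-1 tanh network.
        W1, b1 = w[:6].reshape(2, 3), w[6:9]
        W2, b2 = w[9:12], w[12]
        h = np.tanh(X @ W1 + b1)
        return np.mean((h @ W2 + b2 - y) ** 2)

    pop = rng.standard_normal((50, 13))                # initial population
    for generation in range(300):
        fitness = np.array([loss(w) for w in pop])
        parents = pop[np.argsort(fitness)[:10]]        # only the best survive
        children = (parents[rng.integers(0, 10, 40)]   # offspring of survivors
                    + 0.1 * rng.standard_normal((40, 13)))  # plus mutation
        pop = np.vstack([parents, children])

    print(min(loss(w) for w in pop))                   # near zero: XOR learned
    ```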
    • aber0der · 4 days ago
    The end of chess.

    I thought they would take our jobs. Instead the AIs are playing games and driving expensive automobiles.