This summit invited many professors, including Yoshua Bengio, Bishop, Lenore Blum, and others. The agenda is here.
Babysitting AI and computational neuroscience impressed me the most.

Lenore Blum: Computational neuroscience, robots with feelings
Prof. Lenore Blum's keynote was about computational neuroscience. Her team tries to let robots feel pain, and to simulate the long-term and short-term encoding that happens in our brains.
What she talked about reminded me of something I had read before.
According to the book Psychology by Daniel Schacter, our brains do encode information into long-term and short-term forms, and the short-term form can be transformed into a long-term one. Even when we don't intend to encode or memorize something, encoding still occurs unconsciously.
Reviewing is one useful way to recall this information. Moreover, if you are in an environment similar to the one where the encoding originally happened, you have a higher chance of recalling it. However, more encoding errors creep in during each review, which is why a detective should try to get complete information from witnesses the first time they are questioned.
Also, our brains are more sensitive to pictorial information than to textual information. So if you turn each thing you want to remember into a mental picture, you can boost your capacity for memorization.
Yoshua Bengio: Beyond i.i.d. and babysitting AI
One of the basic assumptions that makes generalization possible is the independent and identically distributed (i.i.d.) assumption. However, because of measurement equipment, imbalanced samples, and other factors, the distributions of training data and test data are not always the same. Hence, Prof. Bengio's team proposes viewing all data as sampled from the same system rather than from the same distribution. Take the weather of two different seasons: the data describing them are sampled from the same atmospheric circulation system, but they are not identically distributed. Bengio said they tend to initialize this system with diverse initial conditions, and the resulting distribution is taken as what the data set follows. It makes sense.
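As a toy illustration of this "same system, different distributions" idea (my own sketch, not the model from the talk; the transition rule, offsets, and constants are all made up), the snippet below runs one shared dynamical rule with two different initial conditions and seasonal phases, and gets two clearly different empirical distributions:

```python
import numpy as np

def simulate(init_temp, season_offset, n_days=90, seed=0):
    """One shared 'atmospheric' rule: temperature relaxes toward a
    seasonal mean plus noise. Only the initial condition and the
    seasonal phase differ between runs."""
    rng = np.random.default_rng(seed)
    temps = [init_temp]
    for day in range(1, n_days):
        seasonal = 10 * np.sin(2 * np.pi * (day + season_offset) / 365)
        temps.append(0.8 * temps[-1] + 0.2 * seasonal + rng.normal(0, 1))
    return np.array(temps)

# Same generating system, different initial conditions and phases:
summer = simulate(init_temp=25.0, season_offset=0)
winter = simulate(init_temp=-5.0, season_offset=182, seed=1)

# The two empirical distributions are clearly not identical.
print(summer.mean().round(1), winter.mean().round(1))
```

Training on the "summer" sample and testing on the "winter" sample would violate i.i.d., yet both are draws from the very same system, which is exactly the reframing Bengio described.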
One thing that troubles me is: how can I model the system and figure out the initial conditions? For phenomena governed by well-known physical laws this is easy, and the model code is often even openly accessible online. But what about systems whose rules I don't know? How can this idea be made to work in common situations?
CNNs are renowned for their power in representation learning, which encodes a variety of information into vectors; our brains also work like this. Nonetheless, what they learn is supervised, and the usefulness of binary networks and of network-structure simplification both demonstrate that the captured information is redundant. The learned model is fragile, too: after adding some noise to an image, even though the image doesn't change visually for us humans, the model can no longer tell what it is. This all comes from uncontrolled, unsupervised representation learning. Babysitting AI aims at modeling with environmental and other contextual information, so as to guide AI models.
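The fragility claim can be shown even without a CNN. Below is a minimal sketch (my own toy example with hand-picked weights, not from the talk) of an FGSM-style perturbation on a linear classifier: for a linear score the gradient with respect to the input is just the weight vector, so a small signed step against it flips the prediction:

```python
import numpy as np

# Tiny hand-made linear classifier standing in for a trained model.
w = np.array([1.0, -2.0, 0.5, 1.5])

def predict(x):
    return 1 if w @ x > 0 else 0

x = np.array([0.5, 0.1, 0.3, 0.2])   # the "image", classified as 1

# FGSM-style step: move each input entry against the class score.
# For a linear model, the gradient of the score w.r.t. x is exactly w.
eps = 0.3
x_adv = x - eps * np.sign(w)         # small, structured perturbation

print(predict(x), predict(x_adv))    # → 1 0
```

Each entry moves by at most 0.3, yet the label flips; in a real CNN the same trick, applied per pixel, produces noise that is invisible to humans.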
Andrew C. Yao: The Advent of Quantum Computing
Quantum computing will offer exponential speedups for cryptographic code breaking, simulation of quantum physical systems, simulation of materials, chemistry, and biology, nonlinear optimization, and ML/AI. It will break through the bottleneck of classical computing.
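To make the "exponential" part concrete, here is a tiny statevector sketch (my own illustration, not from the talk): an n-qubit register needs 2**n complex amplitudes, which is exactly the resource that blows up for classical simulators and that quantum hardware represents natively.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

def uniform_superposition(n):
    """Apply a Hadamard to each of n qubits, starting from |0...0>."""
    q = H @ np.array([1.0, 0.0])      # single qubit: (|0> + |1>) / sqrt(2)
    state = np.array([1.0])
    for _ in range(n):
        state = np.kron(state, q)     # state vector doubles with each qubit
    return state

for n in (2, 10, 20):
    print(n, len(uniform_superposition(n)))  # 4, 1024, 1048576 amplitudes
```

Twenty qubits already take about a million amplitudes to track classically, and each extra qubit doubles that.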
Its implementation is like crystallography: in crystallography, you take an X-ray photograph of a crystal and then compute its structure from it. For quantum computing, instead of taking a real photo, you only need to collect a polynomial number of sample points, and by wave-particle duality these samples can recreate the raw image probabilistically.
According to Andrew Yao, diamond qubits are the most likely to end up in our laptops.