Silver, D. et al. Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017).
Cox, D. D. & Dean, T. Neural networks and neuroscience-inspired computer vision. Curr. Biol. 24, R921–R929 (2014).
Milakov, M. Deep Learning With GPUs. https://www.nvidia.co.uk/docs/IO/147844/Deep-Learning-With-GPUs-MaximMilakov-NVIDIA.pdf (Nvidia, 2014).
Bullmore, E. & Sporns, O. The economy of brain network organization. Nat. Rev. Neurosci. 13, 336–349 (2012).
Felleman, D. J. & Van Essen, D. C. Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex 1, 1–47 (1991).
Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems Vol. 28 (eds Pereira, F. et al.) 1097–1105 (Neural Information Processing Systems Foundation, 2012). This work—using deep convolutional networks—was the first to win the ImageNet challenge, fuelling the subsequent deep-learning revolution.
Deco, G., Rolls, E. T. & Romo, R. Stochastic dynamics as a principle of brain function. Prog. Neurobiol. 88, 1–16 (2009).
Venkataramani, S., Roy, K. & Raghunathan, A. Efficient embedded learning for IoT devices. In 21st Asia and South Pacific Design Automation Conf. 308–311 (IEEE, 2016).
Maass, W. Networks of spiking neurons: the third generation of neural network models. Neural Netw. 10, 1659–1671 (1997). This paper was one of the first works to provide a rigorous mathematical analysis of the computational power of spiking neurons, categorizing them as the third generation of neural networks (after perceptrons and sigmoidal neurons).
McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943).
Nair, V. & Hinton, G. E. Rectified linear units improve restricted Boltzmann machines. In Proc. 27th Int. Conf. on Machine Learning (eds Fürnkranz, J. & Joachims, T.) 807–814 (IMLS, 2010).
Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. Nature 323, 533–536 (1986). This seminal work proposed gradient-descent-based backpropagation as a learning method for neural networks.
Izhikevich, E. M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572 (2003).
Hebb, D. O. The Organization of Behavior: A Neuropsychological Theory (Wiley, 1949).
Abbott, L. F. & Nelson, S. B. Synaptic plasticity: taming the beast. Nat. Neurosci. 3, 1178–1183 (2000).
Liu, S.-C. & Delbruck, T. Neuromorphic sensory systems. Curr. Opin. Neurobiol. 20, 288–295 (2010).
Lichtsteiner, P., Posch, C. & Delbruck, T. A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE J. Solid-State Circuits 43, 566–576 (2008).
Vanarse, A., Osseiran, A. & Rassau, A. A review of current neuromorphic approaches for vision, auditory, and olfactory sensors. Front. Neurosci. 10, 115 (2016).
Benosman, R., Ieng, S.-H., Clercq, C., Bartolozzi, C. & Srinivasan, M. Asynchronous frameless event-based optical flow. Neural Netw. 27, 32–37 (2012).
Wongsuphasawat, K. & Gotz, D. Exploring flow, factors, and outcomes of temporal event sequences with the outflow visualization. IEEE Trans. Vis. Comput. Graph. 18, 2659–2668 (2012).
Rogister, P., Benosman, R., Ieng, S.-H., Lichtsteiner, P. & Delbruck, T. Asynchronous event-based binocular stereo matching. IEEE Trans. Neural Netw. Learn. Syst. 23, 347–353 (2012).
Osswald, M., Ieng, S.-H., Benosman, R. & Indiveri, G. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems. Sci. Rep. 7, 40703 (2017).
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. Preprint at http://arxiv.org/abs/1207.0580 (2012).
Deng, J. et al. ImageNet: a large-scale hierarchical image database. In IEEE Conf. on Computer Vision and Pattern Recognition 248–255 (IEEE, 2009).
Rullen, R. V. & Thorpe, S. J. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Comput. 13, 1255–1283 (2001).
Hu, Y., Liu, H., Pfeiffer, M. & Delbruck, T. DVS benchmark datasets for object tracking, action recognition, and object recognition. Front. Neurosci. 10, 405 (2016).
Geiger, A., Lenz, P., Stiller, C. & Urtasun, R. Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32, 1231–1237 (2013).
Barranco, F., Fermuller, C., Aloimonos, Y. & Delbruck, T. A dataset for visual navigation with neuromorphic methods. Front. Neurosci. 10, 49 (2016).
Sengupta, A., Ye, Y., Wang, R., Liu, C. & Roy, K. Going deeper in spiking neural networks: VGG and residual architectures. Front. Neurosci. 13, 95 (2019). This paper was the first to demonstrate the competitive performance of a conversion-based spiking neural network on ImageNet data for deep neural architectures.
Cao, Y., Chen, Y. & Khosla, D. Spiking deep convolutional neural networks for energy-efficient object recognition. Int. J. Comput. Vis. 113, 54–66 (2015).
Diehl, P. U. et al. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In Int. Joint Conf. on Neural Networks 2933–2341 (IEEE, 2015).
Pérez-Carrasco, J. A. et al. Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing—application to feedforward ConvNets. IEEE Trans. Pattern Anal. Mach. Intell. 35, 2706–2719 (2013).
Rueckauer, B., Lungu, I.-A., Hu, Y., Pfeiffer, M. & Liu, S.-C. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Front. Neurosci. 11, 682 (2017).
Diehl, P. U., Zarrella, G., Cassidy, A. S., Pedroni, B. U. & Neftci, E. Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. In Int. Conf. on Rebooting Computing 20 (IEEE, 2016).
Abadi, M. et al. TensorFlow: a system for large-scale machine learning. In 12th USENIX Symp. Operating Systems Design and Implementation 265–283 (2016).
Hunsberger, E. & Eliasmith, C. Spiking deep networks with LIF neurons. Preprint at http://arxiv.org/abs/1510.08829 (2015).
Pfeiffer, M. & Pfeil, T. Deep learning with spiking neurons: opportunities and challenges. Front. Neurosci. 12, 774 (2018).
Ponulak, F. & Kasiński, A. Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Comput. 22, 467–510 (2010).
Gütig, R. & Sompolinsky, H. The tempotron: a neuron that learns spike-timing-based decisions. Nat. Neurosci. 9, 420–428 (2006).
Bohte, S. M., Kok, J. N. & La Poutré, H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48, 17–37 (2002).
Ghosh-Dastidar, S. & Adeli, H. A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection. Neural Netw. 22, 1419–1431 (2009).
Anwani, N. & Rajendran, B. NormAD: normalized approximate descent-based supervised learning rule for spiking neurons. In Int. Joint Conf. on Neural Networks 2361–2368 (IEEE, 2015).
Lee, J. H., Delbruck, T. & Pfeiffer, M. Training deep spiking neural networks using backpropagation. Front. Neurosci. 10, 508 (2016).
Orchard, G. et al. HFirst: a temporal approach to object recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37, 2028–2040 (2015).
Mostafa, H. Supervised learning based on temporal coding in spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 29, 3227–3235 (2018).
Panda, P. & Roy, K. Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition. In Int. Joint Conf. on Neural Networks 299–306 (IEEE, 2016).
LeCun, Y., Cortes, C. & Burges, C. J. C. The MNIST Database of Handwritten Digits http://yann.lecun.com/exdb/mnist/ (1998).
Masquelier, T., Guyonneau, R. & Thorpe, S. J. Competitive STDP-based spike pattern learning. Neural Comput. 21, 1259–1276 (2009).
Diehl, P. U. & Cook, M. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Front. Comput. Neurosci. 9, 99 (2015). This is a good introduction to implementing spiking neural networks with unsupervised STDP-based learning for real-world tasks such as digit recognition.
Kheradpisheh, S. R., Ganjtabesh, M., Thorpe, S. J. & Masquelier, T. STDP-based spiking deep convolutional neural networks for object recognition. Neural Netw. 99, 56–67 (2018).
Neftci, E., Das, S., Pedroni, B., Kreutz-Delgado, K. & Cauwenberghs, G. Event-driven contrastive divergence for spiking neuromorphic systems. Front. Neurosci. 7, 272 (2014).
Stromatias, E., Soto, M., Serrano-Gotarredona, T. & Linares-Barranco, B. An event-driven classifier for spiking neural networks fed with synthetic or dynamic vision sensor data. Front. Neurosci. 11, 350 (2017).
Lee, C., Panda, P., Srinivasan, G. & Roy, K. Training deep spiking convolutional neural networks with STDP-based unsupervised pre-training followed by supervised fine-tuning. Front. Neurosci. 12, 435 (2018).
Mostafa, H., Ramesh, V. & Cauwenberghs, G. Deep supervised learning using local errors. Front. Neurosci. 12, 608 (2018).
Neftci, E. O., Augustine, C., Paul, S. & Detorakis, G. Event-driven random back-propagation: enabling neuromorphic deep learning machines. Front. Neurosci. 11, 324 (2017).
Srinivasan, G., Sengupta, A. & Roy, K. Magnetic tunnel junction based long-term short-term stochastic synapse for a spiking neural network with on-chip STDP learning. Sci. Rep. 6, 29545 (2016).
Tavanaei, A., Masquelier, T. & Maida, A. S. Acquisition of visual features through probabilistic spike-timing-dependent plasticity. In Int. Joint Conf. on Neural Networks 307–314 (IEEE, 2016).
Bagheri, A., Simeone, O. & Rajendran, B. Training probabilistic spiking neural networks with first-to-spike decoding. In Int. Conf. on Acoustics, Speech and Signal Processing 2986–2990 (IEEE, 2018).
Rastegari, M., Ordonez, V., Redmon, J. & Farhadi, A. XNOR-Net: ImageNet classification using binary convolutional neural networks. In Eur. Conf. on Computer Vision 525–542 (Springer, 2016).
Courbariaux, M., Bengio, Y. & David, J.-P. BinaryConnect: training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems Vol. 28 (eds Cortes, C. et al.) 3123–3131 (Neural Information Processing Systems Foundation, 2015).
Stromatias, E. et al. Robustness of spiking deep belief networks to noise and reduced bit precision of neuro-inspired hardware platforms. Front. Neurosci. 9, 222 (2015).
Florian, R. V. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Comput. 19, 1468–1502 (2007).
Vasilaki, E., Frémaux, N., Urbanczik, R., Senn, W. & Gerstner, W. Spike-based reinforcement learning in continuous state and action space: when policy gradient methods fail. PLOS Comput. Biol. 5, e1000586 (2009).
Zuo, F. et al. Habituation-based synaptic plasticity and organismic learning in a quantum perovskite. Nat. Commun. 8, 240 (2017).
Masquelier, T. & Thorpe, S. J. Unsupervised learning of visual features through spike-timing-dependent plasticity. PLOS Comput. Biol. 3, e31 (2007).
Rao, R. P. & Sejnowski, T. J. Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Comput. 13, 2221–2237 (2001).
Roy, S. & Basu, A. An online unsupervised structural plasticity algorithm for spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28, 900–910 (2017).
Maass, W. Liquid state machines: motivation, theory, and applications. In Computability in Context: Computation and Logic in the Real World (eds Cooper, S. B. & Sorbi, A.) 275–296 (Imperial College Press, 2011).
Schrauwen, B., D’Haene, M., Verstraeten, D. & Van Campenhout, J. Compact hardware liquid state machines on FPGA for real-time speech recognition. Neural Netw. 21, 511–523 (2008).
Verstraeten, D., Schrauwen, B., Stroobandt, D. & Van Campenhout, J. Isolated word recognition with the liquid state machine: a case study. Inf. Process. Lett. 95, 521–528 (2005).
Panda, P. & Roy, K. Learning to generate sequences with combination of Hebbian and non-Hebbian plasticity in recurrent spiking neural networks. Front. Neurosci. 11, 693 (2017).
Maher, M. A. C., Deweerth, S. P., Mahowald, M. A. & Mead, C. A. Implementing neural architectures using analog VLSI circuits. IEEE Trans. Circ. Syst. 36, 643–652 (1989).
Mead, C. Neuromorphic electronic systems. Proc. IEEE 78, 1629–1636 (1990). This seminal work established neuromorphic electronic systems as a new paradigm in computing and highlights Mead’s vision of going beyond the precise and well-defined nature of digital computing towards brain-like aspects.
Mead, C. A. Neural hardware for vision. Eng. Sci. 50, 2–7 (1987).
NVIDIA Launches the World’s First Graphics Processing Unit GeForce 256. https://www.nvidia.com/object/IO_20020111_5424.html (Nvidia, 1999).
Nageswaran, J. M., Dutt, N., Krichmar, J. L., Nicolau, A. & Veidenbaum, A. V. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors. Neural Netw. 22, 791–800 (2009).
Fidjeland, A. K. & Shanahan, M. P. Accelerated simulation of spiking neural networks using GPUs. In Int. Joint. Conf. on Neural Networks 3041–3048 (IEEE, 2010).
Davies, M. et al. Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99 (2018).
Blouw, P., Choo, X., Hunsberger, E. & Eliasmith, C. Benchmarking keyword spotting efficiency on neuromorphic hardware. In Proc. 7th Annu. Neuro-inspired Computational Elements Workshop 1 (ACM, 2018).
Hsu, J. How IBM got brainlike efficiency from the TrueNorth chip. IEEE Spectrum https://spectrum.ieee.org/computing/hardware/how-ibm-got-brainlike-efficiency-from-the-truenorth-chip (29 September 2014).
Khan, M. M. et al. SpiNNaker: mapping neural networks onto a massively parallel chip multiprocessor. In Int. Joint Conf. on Neural Networks 2849–2856 (IEEE, 2008). This was one of the first works to implement a large-scale spiking neural network on hardware using event-driven computations and commercial processors.
Benjamin, B. V. et al. Neurogrid: a mixed-analog–digital multichip system for large-scale neural simulations. Proc. IEEE 102, 699–716 (2014).
Schemmel, J. et al. A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Int. Symp. Circuits and Systems 1947–1950 (IEEE, 2010).
Merolla, P. A. et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673 (2014). This work describes TrueNorth, the first digital custom-designed, large-scale neuromorphic processor, an outcome of the DARPA SyNAPSE programme; it was geared towards solving commercial applications through a digital neuromorphic implementation.
Furber, S. Large-scale neuromorphic computing systems. J. Neural Eng. 13, 051001 (2016).
Qiao, N. et al. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128k synapses. Front. Neurosci. 9, 141 (2015).
Indiveri, G. et al. Neuromorphic silicon neuron circuits. Front. Neurosci. 5, 73 (2011).
Seo, J.-s. et al. A 45 nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons. In Custom Integrated Circuits Conf. 311–334 (IEEE, 2011).
Boahen, K. A. Point-to-point connectivity between neuromorphic chips using address events. IEEE Trans. Circuits Syst. II 47, 416–434 (2000). This paper describes the fundamentals of address event representation and its application to neuromorphic systems.
Serrano-Gotarredona, R. et al. AER building blocks for multi-layer multi-chip neuromorphic vision systems. In Advances in Neural Information Processing Systems Vol. 18 (eds Weiss, Y., Schölkopf, B. & Platt, J. C.) 1217–1224 (Neural Information Processing Systems Foundation, 2006).
Moore, G. E. Cramming more components onto integrated circuits. Proc. IEEE 86, 82–85 (1998).
Waldrop, M. M. The chips are down for Moore’s law. Nature 530, 144 (2016).
von Neumann, J. First draft of a report on the EDVAC. IEEE Ann. Hist. Comput. 15, 27–75 (1993).
Mahapatra, N. R. & Venkatrao, B. The processor–memory bottleneck: problems and solutions. Crossroads 5, 2 (1999).
Gokhale, M., Holmes, B. & Iobst, K. Processing in memory: the Terasys massively parallel PIM array. Computer 28, 23–31 (1995).
Elliott, D., Stumm, M., Snelgrove, W. M., Cojocaru, C. & McKenzie, R. Computational RAM: implementing processors in memory. IEEE Des. Test Comput. 16, 32–41 (1999).
Ankit, A., Sengupta, A., Panda, P. & Roy, K. RESPARC: a reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks. In Proc. 54th ACM/EDAC/IEEE Annual Design Automation Conf. 63.2 (IEEE, 2017).
Bez, R. & Pirovano, A. Non-volatile memory technologies: emerging concepts and new materials. Mater. Sci. Semicond. Process. 7, 349–355 (2004).
Xue, C. J. et al. Emerging non-volatile memories: opportunities and challenges. In Proc. 9th Int. Conf. on Hardware/Software Codesign and System Synthesis 325–334 (IEEE, 2011).
Wong, H.-S. P. & Salahuddin, S. Memory leads the way to better computing. Nat. Nanotechnol. 10, 191 (2015); correction 10, 660 (2015).
Chi, P. et al. PRIME: a novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. In Proc. 43rd Int. Symp. Computer Architecture 27–39 (IEEE, 2016).
Shafiee, A. et al. ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars. In Proc. 43rd Int. Symp. Computer Architecture 14–26 (IEEE, 2016).
Burr, G. W. et al. Neuromorphic computing using non-volatile memory. Adv. Phys. X 2, 89–124 (2017).
Snider, G. S. Spike-timing-dependent learning in memristive nanodevices. In Proc. Int. Symp. on Nanoscale Architectures 85–92 (IEEE, 2008).
Chua, L. Memristor—the missing circuit element. IEEE Trans. Circuit Theory 18, 507–519 (1971). This was the first work to conceptualize memristors as fundamental passive circuit elements; they are currently being investigated as high-density storage devices through various emerging technologies for conventional general-purpose and neuromorphic computing architectures.
Strukov, D. B., Snider, G. S., Stewart, D. R. & Williams, R. S. The missing memristor found. Nature 453, 80–83 (2008).
Waser, R., Dittmann, R., Staikov, G. & Szot, K. Redox-based resistive switching memories—nanoionic mechanisms, prospects, and challenges. Adv. Mater. 21, 2632–2663 (2009).
Burr, G. W. et al. Recent progress in phase-change memory technology. IEEE J. Em. Sel. Top. Circuits Syst. 6, 146–162 (2016).
Hosomi, M. et al. A novel nonvolatile memory with spin torque transfer magnetization switching: spin-RAM. In Int. Electron Devices Meeting 459–462 (IEEE, 2005).
Ambrogio, S. et al. Statistical fluctuations in HfOx resistive-switching memory. Part I—set/reset variability. IEEE Trans. Electron Dev. 61, 2912–2919 (2014).
Fantini, A. et al. Intrinsic switching variability in HfO2 RRAM. In 5th Int. Memory Workshop 30–33 (IEEE, 2013).
Merrikh-Bayat, F. et al. High-performance mixed-signal neurocomputing with nanoscale floating-gate memory cell arrays. IEEE Trans. Neural Netw. Learn. Syst. 29, 4782–4790 (2017).
Ramakrishnan, S., Hasler, P. E. & Gordon, C. Floating-gate synapses with spike-time-dependent plasticity. IEEE Trans. Biomed. Circuits Syst. 5, 244–252 (2011).
Hasler, J. & Marr, H. B. Finding a roadmap to achieve large neuromorphic hardware systems. Front. Neurosci. 7, 118 (2013).
Hasler, P. E., Diorio, C., Minch, B. A. & Mead, C. Single transistor learning synapses. In Advances in Neural Information Processing Systems Vol. 7 (eds Tesauro, G., Touretzky, D. S. & Leen, T. K.) 817–824 (Neural Information Processing Systems Foundation, 1995). This was one of the first works to use a non-volatile memory device—specifically, a floating-gate transistor—as a synaptic element.
Holler, M., Tam, S., Castro, H. & Benson, R. An electrically trainable artificial neural network (ETANN) with 10240 ‘floating gate’ synapses. In Int. Joint Conf. on Neural Networks Vol. 2, 191–196 (1989).
Chen, P.-Y. et al. Technology–design co-optimization of resistive cross-point array for accelerating learning algorithms on chip. In Proc. Eur. Conf. on Design, Automation & Testing 854–859 (IEEE, 2015).
Chakraborty, I., Roy, D. & Roy, K. Technology aware training in memristive neuromorphic systems for nonideal synaptic crossbars. IEEE Trans. Em. Top. Comput. Intell. 2, 335–344 (2018).
Alibart, F., Gao, L., Hoskins, B. D. & Strukov, D. B. High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm. Nanotechnology 23, 075201 (2012).
Dong, Q. et al. A 4 + 2T SRAM for searching and in-memory computing with a 0.3-V VDDmin. IEEE J. Solid-State Circuits 53, 1006–1015 (2018).
Agrawal, A., Jaiswal, A., Lee, C. & Roy, K. X-SRAM: enabling in-memory Boolean computations in CMOS static random-access memories. IEEE Trans. Circuits Syst. I 65, 4219–4232 (2018).
Eckert, C. et al. Neural cache: bit-serial in-cache acceleration of deep neural networks. In Proc. 45th Ann. Int. Symp. Computer Architecture 383–396 (IEEE, 2018).
Gonugondla, S. K., Kang, M. & Shanbhag, N. R. A variation-tolerant in-memory machine-learning classifier via on-chip training. IEEE J. Solid-State Circuits 53, 3163–3173 (2018).
Biswas, A. & Chandrakasan, A. P. Conv-RAM: an energy-efficient SRAM with embedded convolution computation for low-power CNN-based machine learning applications. In Int. Solid-State Circuits Conf. 488–490 (IEEE, 2018).
Kang, M., Keel, M.-S., Shanbhag, N. R., Eilert, S. & Curewitz, K. An energy-efficient VLSI architecture for pattern recognition via deep embedding of computation in SRAM. In Int. Conf. on Acoustics, Speech and Signal Processing 8326–8330 (IEEE, 2014).
Seshadri, V. et al. RowClone: fast and energy-efficient in-DRAM bulk data copy and initialization. In Proc. 46th Ann. IEEE/ACM Int. Symp. Microarchitecture 185–197 (ACM, 2013).
Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).
Sebastian, A. et al. Temporal correlation detection using computational phase-change memory. Nat. Commun. 8, 1115 (2017).
Jain, S., Ranjan, A., Roy, K. & Raghunathan, A. Computing in memory with spin-transfer torque magnetic RAM. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 26, 470–483 (2018).
Jabri, M. & Flower, B. Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks. IEEE Trans. Neural Netw. 3, 154–157 (1992).
Diorio, C., Hasler, P., Minch, B. A. & Mead, C. A. A floating-gate MOS learning array with locally computed weight updates. IEEE Trans. Electron Dev. 44, 2281–2289 (1997).
Bayat, F. M., Prezioso, M., Chakrabarti, B., Kataeva, I. & Strukov, D. Memristor-based perceptron classifier: increasing complexity and coping with imperfect hardware. In Proc. 36th Int. Conf. on Computer-Aided Design 549–554 (IEEE, 2017).
Guo, X. et al. Fast, energy-efficient, robust, and reproducible mixed-signal neuromorphic classifier based on embedded NOR flash memory technology. In Int. Electron Devices Meeting 6.5 (IEEE, 2017).
Liu, C., Hu, M., Strachan, J. P. & Li, H. Rescuing memristor-based neuromorphic design with high defects. In Proc. 54th ACM/EDAC/IEEE Design Automation Conf. 76.6 (IEEE, 2017).
Tuma, T., Pantazi, A., Le Gallo, M., Sebastian, A. & Eleftheriou, E. Stochastic phase-change neurons. Nat. Nanotechnol. 11, 693–699 (2016).
Fukushima, A. et al. Spin dice: a scalable truly random number generator based on spintronics. Appl. Phys. Express 7, 083001 (2014).
Le Gallo, M. et al. Mixed-precision in-memory computing. Nat. Electron. 1, 246 (2018).
Krstic, M., Grass, E., Gürkaynak, F. K. & Vivet, P. Globally asynchronous, locally synchronous circuits: overview and outlook. IEEE Des. Test Comput. 24, 430–441 (2007).
Choi, H. et al. An electrically modifiable synapse array of resistive switching memory. Nanotechnology 20, 345201 (2009).
Serrano-Gotarredona, T., Masquelier, T., Prodromakis, T., Indiveri, G. & Linares-Barranco, B. STDP and STDP variations with memristors for spiking neuromorphic learning systems. Front. Neurosci. 7, 2 (2013).
Kuzum, D., Jeyasingh, R. G., Lee, B. & Wong, H.-S. P. Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing. Nano Lett. 12, 2179–2186 (2012).
Krzysteczko, P., Münchenberger, J., Schäfers, M., Reiss, G. & Thomas, A. The memristive magnetic tunnel junction as a nanoscopic synapse–neuron system. Adv. Mater. 24, 762–766 (2012).
Vincent, A. F. et al. Spin-transfer torque magnetic memory as a stochastic memristive synapse for neuromorphic systems. IEEE Trans. Biomed. Circuits Syst. 9, 166–174 (2015).
Sengupta, A. & Roy, K. Encoding neural and synaptic functionalities in electron spin: a pathway to efficient neuromorphic computing. Appl. Phys. Rev. 4, 041105 (2017).
Borghetti, J. et al. ‘Memristive’ switches enable ‘stateful’ logic operations via material implication. Nature 464, 873–876 (2010).
Hu, M. et al. Dot-product engine for neuromorphic computing: programming 1T1M crossbar to accelerate matrix-vector multiplication. In Proc. 53rd ACM/EDAC/IEEE Annual Design Automation Conf. 21.1 (IEEE, 2016).
Sheridan, P. M. et al. Sparse coding with memristor networks. Nat. Nanotechnol. 12, 784–789 (2017).
Wright, C. D., Liu, Y., Kohary, K. I., Aziz, M. M. & Hicken, R. J. Arithmetic and biologically-inspired computing using phase-change materials. Adv. Mater. 23, 3408–3413 (2011).
Le Gallo, M., Sebastian, A., Cherubini, G., Giefers, H. & Eleftheriou, E. Compressed sensing recovery using computational memory. In Int. Electron Devices Meeting 28.3.1 (IEEE, 2017).
Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386 (1958).
Bi, G. Q. & Poo, M. M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998).