Topics: Artificial Intelligence; Machine Learning; Signal Processing
Dr Varun Ojha is a researcher in artificial intelligence who works primarily on neural networks and data science. He is currently a Lecturer in Computer Science at the University of Reading, UK. Previously, as a Postdoctoral Fellow at the Swiss Federal Institute of Technology (ETH) Zurich, Switzerland, he led a team of researchers investigating human perception of dynamic urban environments using machine learning. Before that, he was a Marie Curie Fellow at the Technical University of Ostrava, Czech Republic, where he also received his PhD in Computer Science. Earlier, he held a research fellowship funded by the Government of India’s Department of Science and Technology at Visva-Bharati University, India, to develop an intelligent pattern-recognition system for mixed toxic gases. Dr Ojha has 60+ research publications in international peer-reviewed journals and conferences.
The simpler a model is, the better its generalization. This research presents a class of neurally inspired algorithms that are highly sparse in their architectural construction yet perform with high accuracy. In addition, they carry out simultaneous function approximation and feature selection when solving machine learning tasks: classification, regression, and pattern recognition. This class comprises the Neural Tree Algorithms: the Heterogeneous Neural Tree, the Multi-Output Neural Tree, and the Backpropagation Neural Tree. The research found that any such arbitrarily constructed neural tree, which resembles an arbitrarily “thinned” neural network, can solve machine learning tasks with an equivalent or better degree of accuracy than a fully connected, symmetric, and systematic neural network architecture. The algorithm takes random, repeated inputs through its leaves and imposes dendritic nonlinearities through its internal connections, much as a biological dendritic tree does. It produces an ad hoc neural tree that is trained using a stochastic gradient descent optimizer. The algorithms yield high-performing, parsimonious models that balance complexity with descriptive ability on a wide variety of machine learning problems.
- Ojha, V., & Nicosia, G. (2022). Backpropagation neural tree. Neural Networks, 149, 66-83: https://arxiv.org/pdf/2202.02248.pdf
- Ojha, V., & Nicosia, G. (2020). Multiobjective optimization of multi-output neural trees. In 2020 IEEE Congress on Evolutionary Computation (CEC) (pp. 1-8). IEEE Press: https://arxiv.org/pdf/2010.04524.pdf
- Ojha, V. K., Abraham, A., & Snášel, V. (2017). Ensemble of heterogeneous flexible neural trees using multiobjective genetic programming. Applied Soft Computing, 52, 909-924: https://arxiv.org/pdf/1705.05592.pdf
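The neural tree idea above can be sketched in a few lines of pure Python. This is a minimal illustration, not the authors' implementation: the tree here is a full binary tree (the papers use arbitrary shapes), the toy regression task, feature count, depth, and learning rate are all assumptions chosen for the example, and only a tanh node type is used. It does show the two mechanisms the abstract describes: leaves that sample input features at random (with repetition, so irrelevant features can be ignored), and internal nodes that impose a nonlinearity, trained end-to-end by stochastic gradient descent.

```python
import math
import random

random.seed(0)

class Leaf:
    """Leaf node: reads one randomly chosen (possibly repeated) input feature."""
    def __init__(self, index):
        self.index = index
    def forward(self, x):
        self.out = x[self.index]
        return self.out
    def backward(self, grad, lr):
        pass  # a leaf holds no trainable parameters

class Node:
    """Internal node: weighted sum of children through a tanh nonlinearity,
    playing the role of a dendritic nonlinearity."""
    def __init__(self, children):
        self.children = children
        self.w = [random.uniform(-1, 1) for _ in children]
        self.b = random.uniform(-1, 1)
    def forward(self, x):
        z = sum(w * c.forward(x) for w, c in zip(self.w, self.children)) + self.b
        self.out = math.tanh(z)
        return self.out
    def backward(self, grad, lr):
        dz = grad * (1.0 - self.out ** 2)       # derivative of tanh
        for i, c in enumerate(self.children):
            c.backward(dz * self.w[i], lr)      # chain rule into the subtree
            self.w[i] -= lr * dz * c.out        # stochastic gradient step
        self.b -= lr * dz

def random_tree(n_features, depth, arity=2):
    """Grow a (here: full) tree; leaves sample features with repetition."""
    if depth == 0:
        return Leaf(random.randrange(n_features))
    return Node([random_tree(n_features, depth - 1, arity) for _ in range(arity)])

# Toy regression: y = 0.5 * (x0 - x1); features x2 and x3 are irrelevant.
data = []
for _ in range(200):
    x = [random.uniform(-1, 1) for _ in range(4)]
    data.append((x, 0.5 * (x[0] - x[1])))

tree = random_tree(n_features=4, depth=3)
losses = []
for epoch in range(100):
    total = 0.0
    for x, y in data:
        pred = tree.forward(x)
        total += (pred - y) ** 2
        tree.backward(2.0 * (pred - y), lr=0.02)  # online SGD on squared error
    losses.append(total / len(data))

print(f"MSE first epoch: {losses[0]:.4f}  last epoch: {losses[-1]:.4f}")
```

The training error falls over the epochs even though the tree's shape and its leaf-to-feature wiring were drawn entirely at random, which is the abstract's central claim in miniature: an arbitrarily "thinned" architecture can still fit the task.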
Simone Scardapane is an Assistant Professor at Sapienza University of Rome, working in several subfields of deep learning, including graph neural networks and continual learning. He has a strong interest in promoting machine learning in Italy, having contributed to several non-profit activities, including meetups and podcasts. He is an action editor of Neural Networks and Cognitive Computation, a member of the IEEE CIS Social Media Sub-Committee, the IEEE Task Force on Reservoir Computing, and the “Machine Learning in Geodesy” joint study group of the International Association of Geodesy, and chair of the Statistical Pattern Recognition Techniques Technical Committee of the International Association for Pattern Recognition.
Automatic differentiation (autodiff) is at the heart of the deep learning “magic”, and it also powers advances in fields ranging from visual rendering to quantum chemistry. In the first part of this practical tutorial, we present some fundamental ideas from the autodiff field and how they are implemented in several common frameworks, including TensorFlow, PyTorch, and JAX. In the second part, we instead show how to implement a custom PyTorch-like autodiff library from scratch. We conclude with some trends and advanced tools from the autodiff world, e.g., stateless models in PyTorch.
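To give a flavour of the "from scratch" part, here is a minimal sketch of reverse-mode autodiff for scalars. The class name, API, and operator set are this note's own assumptions, not the tutorial's actual library: each `Value` records its parents and the local derivatives of the operation that produced it, and `backward()` applies the chain rule in reverse topological order, exactly the mechanism PyTorch-style frameworks implement at scale.

```python
import math

class Value:
    """A scalar that records the operation producing it, for reverse-mode autodiff."""
    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # the Values this one was computed from
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other), (other.data, self.data))

    def tanh(self):
        t = math.tanh(self.data)
        return Value(t, (self,), (1.0 - t * t,))

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, g in zip(v._parents, v._local_grads):
                p.grad += v.grad * g  # accumulate, since a node can have many uses

x = Value(2.0)
y = Value(3.0)
f = x * y + x        # f = xy + x, so df/dx = y + 1, df/dy = x
f.backward()
print(x.grad, y.grad)  # 4.0 2.0
```

Note the `+=` in the backward pass: because `x` is used twice (once in the product, once in the sum), its gradient is the sum of contributions along both paths, which is why gradients in PyTorch also accumulate until explicitly zeroed.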