Telephone: +442380597678
Email: jsh2@ecs.soton.ac.uk

Dr Jonathon Hare 

Personal homepage
http://github.com/jonhare
http://twitter.com/jon_hare
http://arxiv.org/a/hare_j_4.html
https://scholar.google.co.uk/citations?user=UFeON5oAAAAJ
  • Associate Professor of Computer Science
  • Director of Programmes (Computer Science)

I am an Associate Professor in the School of Electronics & Computer Science. I hold a BEng degree in Aerospace Engineering and PhD in Computer Science, both from the University of Southampton. My main research interests are centred around learnt representations of data. This is a subtopic of machine learning in which machines learn to encode or embed raw data into representations (aka latent spaces or embeddings) that attempt to capture human notions of meaning and semantics, and disentangle the underlying factors that generated the data.

My research necessarily incorporates research into deep neural network models ("deep learning"), as well as the more general notion of differentiable programming. Much of my research focusses on representations of visual information, and hence crosses over into the field of computer vision. I have, however, also worked on representations of textual information and other data modalities, and I am particularly interested in representations at the convergence of different modalities of data. At the same time, I am highly interested in how we can take inspiration from biological systems in the design of our models, both to inform new model architectures (that, for example, have particular performance characteristics or map particularly well to certain hardware) and to understand the emergent representational properties inside those models.

My research has been published in over 100 articles in top peer-reviewed journals and conferences. I am one of only a small handful of UK-based academics to have successfully published papers in both NeurIPS and ICLR, the top international conferences for neural-network, machine-learning and representation-learning research.

Research

Research interests

My main research interests lie in the area of representation learning. The long-term goal of my research is to innovate techniques that allow machines to learn from and understand the information conveyed by data, and to use that information to fulfil the information needs of humans. Broadly speaking, this can be broken down into the following areas:

Novel representation

I have worked on a number of different approaches to creating novel representations from data. These include:

  • New units for representing different data types: With Yan Zhang & Adam Prügel-Bennett, I’ve worked on developing differentiable neural architectures for counting and for working with unordered sets.
  • Embedding and Disentanglement: I’ve worked on a number of aspects of learning joint embeddings of different modalities of data. Recently, with Matthew Painter and Adam Prügel-Bennett, I have looked at how underlying latent processes might be disentangled.
  • Learning architectures under constraints: In recent work with Sulaiman Sadiq and Geoff Merrett, I have started to look at how neural architectures for representation might themselves be learned to optimise against certain hardware constraints. Also related to this theme is joint work with Enrique Marquez and Mahesan Niranjan on Cascade Learning of deep networks, which allows a network to be grown from the bottom up.
  • Representational translation: With Pavlos Vougiouklis, I’ve worked extensively on the problem of translating structured data into a representational space that can then be decoded into human-readable text; in short, we trained neural networks to translate sets of <subject, predicate, object> triples into natural language descriptions.
  • Acceleration features: With Yan Sun and Mark Nixon, I’ve developed novel representations for image sequences based on higher orders of motion such as acceleration and jerk. These representations can, for example, be used as an intermediary in recognition tasks (a minimal sketch of the underlying idea appears after this list).
  • Soft Biometric representation: With Nawaf Almudhahka and Mark Nixon, I’ve developed new forms of soft-biometric representation that allow photographs of people to be recognised from verbal descriptions.
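
To make the higher-order motion idea concrete, here is a minimal sketch (an illustration only, not the published method): it derives velocity, acceleration and jerk from tracked point trajectories by repeated finite differencing. The function name, array shapes and frame rate below are assumptions made purely for the example.

import numpy as np

def motion_descriptors(trajectories, fps=25.0):
    """Velocity, acceleration and jerk for tracked points (illustrative sketch only).

    trajectories: array of shape (T, N, 2) -- T frames, N tracked points, (x, y) pixel coordinates.
    Each successive derivative loses one frame along the time axis.
    """
    dt = 1.0 / fps
    velocity = np.diff(trajectories, n=1, axis=0) / dt      # shape (T-1, N, 2)
    acceleration = np.diff(velocity, n=1, axis=0) / dt      # shape (T-2, N, 2)
    jerk = np.diff(acceleration, n=1, axis=0) / dt          # shape (T-3, N, 2)
    return {"velocity": velocity, "acceleration": acceleration, "jerk": jerk}

# Example: 30 frames of 5 noisy drifting points (synthetic data for demonstration).
rng = np.random.default_rng(0)
trajectories = np.cumsum(rng.normal(size=(30, 5, 2)), axis=0)
for name, arr in motion_descriptors(trajectories).items():
    print(name, arr.shape)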

Understanding representation and taking inspiration from biology

As we work towards the goal of building artificial intelligence, it is important that we understand how our models work internally, and perhaps even utilise knowledge of biological systems in their design. In this space, there are three main directions that stand out:

  • Relating behaviours and internal representations of deep networks to biology: With Daniela Mihai and Ethan Harris, I’ve been working to understand what factors of a neural architecture cause the emergence of particular cell-level properties. In recent work we’ve explored how neural bottlenecks in artificial visual systems give rise to the colour-opponent cell properties observed in real biological systems.
  • Investigating the emergence of visual semantics: Daniela Mihai & I have been exploring what factors cause visual semantics to emerge from artificial agents parameterised by neural networks when they play a visual communication game.
  • In ongoing work with colleagues as part of the EPSRC International Centre for Spatial Computational Learning, we’re considering if certain observations from biological neural networks, such as sparsity and overall architecture, could be used to help develop new ways of designing network models to better fit existing hardware, and better inform the development of future neural network hardware.

Applications of representation

  • Learning representations of aerial imagery: With colleagues from Ordnance Survey and Lancaster University, I’ve spent a lot of time investigating new ways of learning representations with applications in the geospatial intelligence domain. With Iris Kramer, we’ve been investigating how deep learning and representation learning technologies can be applied to the discovery of archaeological sites from aerial imagery and LiDAR data.
  • Learning representations of scanned text documents: I was the Investigator of the Innovate UK-funded Transcribe AI project, which looked at ways of learning representations of scanned textual documents that would allow for automated information extraction and reasoning over the information they convey.

I have also been involved in numerous other projects involving both machine learning and computer vision. For example, I was part of a team that innovated a system for scanning archaeological artefacts in the form of ancient Near Eastern cylinder seals using structured light.

Through all my research, I have made a commitment to open science and reproducible results. The published outcomes of almost all my work are accompanied by open-source implementations that others can view, modify and run. A large body of my earlier research can be found in the OpenIMAJ software project (see http://openimaj.org), which won the prestigious 2011 ACM Multimedia Open Source Software Competition. OpenIMAJ is now used by researchers and developers across the globe, and in a variety of national and international organisations.

Teaching

I currently lead and teach on our popular research-led undergraduate Computer Vision module and the fourth-year/MSc Differentiable Programming/Deep Learning module. I am also part of the teaching team (with Adam Prügel-Bennett) on the Advanced Machine Learning module. In the past I have also taught the fourth-year/MSc Data Mining module, which I designed, as well as Scripting Languages/Cloud Application Development.

My work on designing and delivering these modules has been widely recognised: for Computer Vision, the students nominated me for the 2013-14 Excellence in Teaching Awards, and I won the faculty award for innovative teaching. In 2015 I was awarded a Vice-Chancellor's Teaching Award. The Computer Vision module was also shortlisted in the Blackboard and VLE Awards in 2017 and 2020, and my Data Mining module was shortlisted in 2016.

I am actively involved in matters surrounding teaching in the School, such as engaging with education away days, the education committee, and School reviews. In particular, I was heavily involved in the review of machine learning teaching which led to the creation of new modules on Deep Learning, Natural Language Processing and Machine Learning Technologies.

As Doctoral Programme Director, I am currently also responsible for the education of over 150 PhD students. Through these activities I contribute directly to education policy. I am also heavily involved in the organisation of outreach activities, including the ECS Taster Course.

Together with Adam Prügel-Bennett and Mahesan Niranjan, I am also regularly involved in contract teaching of short courses on Machine Learning to both academia and industry.

I am a Fellow of the HEA and act as a mentor to ECS colleagues undertaking PGCAP and PREP.
