Semi-Supervised Deep Learning

This workshop covers a selection of semi-supervised learning techniques from the literature. It consists of a lecture introducing, at a high level, techniques suitable for semi-supervised deep learning; we discuss how these techniques can be implemented and the underlying assumptions they require. The lecture is followed by a hands-on session where attendees implement a semi-supervised learning technique to train a neural network, and we observe and discuss how the network’s performance and behaviour change as varying amounts of labelled data are provided during training.
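
As a concrete illustration, below is a minimal sketch of one widely used semi-supervised technique, pseudo-labelling, written with TensorFlow/Keras. The data, model, and confidence threshold are illustrative assumptions, not the workshop’s actual materials:

    import numpy as np
    import tensorflow as tf

    def build_model():
        # A small classifier; 20 input features and 2 classes are arbitrary choices.
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(20,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(2, activation="softmax"),
        ])

    # A small labelled set plus a larger unlabelled pool (random stand-ins).
    x_lab = np.random.randn(100, 20).astype("float32")
    y_lab = np.random.randint(0, 2, size=100)
    x_unlab = np.random.randn(1000, 20).astype("float32")

    # Step 1: train on the labelled data alone.
    model = build_model()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(x_lab, y_lab, epochs=5, verbose=0)

    # Step 2: pseudo-label the unlabelled pool, keeping confident predictions only.
    probs = model.predict(x_unlab, verbose=0)
    confident = probs.max(axis=1) > 0.95        # confidence threshold (an assumption)
    x_pseudo = x_unlab[confident]
    y_pseudo = probs[confident].argmax(axis=1)

    # Step 3: retrain on the labelled plus pseudo-labelled data.
    model.fit(np.concatenate([x_lab, x_pseudo]),
              np.concatenate([y_lab, y_pseudo]),
              epochs=5, verbose=0)

Pseudo-labelling leans on the assumption that the model’s confident predictions are usually correct, which is exactly the kind of underlying assumption the lecture examines.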

Introduction to High Performance Computing

This workshop will provide an overview of what a High-Performance Computing (HPC) system looks like. Users will gain hands-on experience with a small cluster and learn how to use Linux command-line tools to write Slurm scripts, which they can submit as simple batch jobs. After the workshop, users will be able to apply what they learn on much bigger systems.
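
For example, a Slurm batch script of the kind the workshop teaches might look like the minimal sketch below; the job name and resource requests are illustrative placeholders, and real clusters define their own partitions and limits:

    #!/bin/bash
    # Resource requests for the scheduler; values here are placeholders.
    #SBATCH --job-name=hello
    #SBATCH --ntasks=1
    #SBATCH --mem=1G
    #SBATCH --time=00:05:00

    # The actual work: report which compute node ran the job.
    echo "Running on $(hostname)"

Such a script is submitted with sbatch (e.g. sbatch hello.sh) and its progress monitored with squeue.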

Introduction to TensorFlow and Deep Learning

Register now for our upcoming Introduction to TensorFlow and Deep Learning workshop, held online via Zoom on Friday, 12 March, from 10:00 am to 3:30 pm AEDT. The workshop will be led by instructors from the Monash eResearch Centre and the Data Science and AI Platform.

Introduction to Python

This hands-on workshop aims to equip participants with the fundamentals of programming in Python and give them the skills needed to apply data analysis approaches to their research questions. The workshop will be taught in a similar style to Data Carpentry workshops; Data Carpentry’s mission is to train researchers in the core data skills for efficient, shareable, and reproducible research practices.
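
As a taste of where the workshop heads, here is a minimal sketch of a Data-Carpentry-style analysis using pandas; the file name and column names are placeholders for whatever dataset a given session uses:

    import pandas as pd

    # Load a tabular dataset; "surveys.csv" is a placeholder file name.
    surveys = pd.read_csv("surveys.csv")

    print(surveys.head())       # inspect the first few rows
    print(surveys.describe())   # quick numeric summary

    # Group-wise summary; "species" and "weight" are placeholder columns.
    print(surveys.groupby("species")["weight"].mean())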

AI with Deep Learning

This intermediate-level workshop introduces how to implement the latest deep learning algorithms and integrate them into your research using an HPC cluster.
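
For instance, a first sanity check when moving deep learning work onto a cluster is confirming that the framework can see the GPUs allocated to your job; a minimal sketch in TensorFlow (an assumption; the workshop may use other frameworks):

    import tensorflow as tf

    # List the GPUs visible to TensorFlow inside the batch job.
    gpus = tf.config.list_physical_devices("GPU")
    print(f"Visible GPUs: {len(gpus)}")
    for gpu in gpus:
        print(gpu.name)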

Data Fluency: Introduction to High Performance Computing

This workshop will provide an overview of what a High-Performance Computing (HPC) system looks like. Users will gain hands-on experience with a small cluster and learn how to use Linux command-line tools to write Slurm scripts, which they can submit as simple batch jobs. After the workshop, users will be able to apply what they learn on much bigger systems.

Deep learning for natural language processing

This workshop introduces natural language as data for deep learning. We discuss various techniques and software packages (e.g. Python strings, regular expressions, Word2Vec) that help us convert, clean, and formalise text data “in the wild” for use in a deep learning model. We then explore training and testing a Recurrent Neural Network on the data to complete a real-world task, using TensorFlow v2 throughout.
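
To give a flavour of that pipeline, here is a minimal sketch in TensorFlow v2; the corpus, labels, and hyperparameters are illustrative placeholders, and a trainable embedding layer stands in for pretrained Word2Vec vectors:

    import re
    import tensorflow as tf

    # Tiny stand-in corpus; real "in the wild" text is far messier.
    texts = ["A GREAT movie!!", "terrible... would not watch again"]
    labels = [1.0, 0.0]

    def clean(text):
        text = text.lower()                        # normalise case
        text = re.sub(r"[^a-z\s]", " ", text)      # strip punctuation and digits
        return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

    cleaned = [clean(t) for t in texts]

    # Map words to integer ids, padding/truncating to a fixed length.
    vectorise = tf.keras.layers.TextVectorization(output_sequence_length=10)
    vectorise.adapt(cleaned)
    ids = vectorise(tf.constant(cleaned))

    # A small recurrent network: embed the ids, run an LSTM, classify.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=len(vectorise.get_vocabulary()),
                                  output_dim=16),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(ids, tf.constant(labels), epochs=3, verbose=0)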

Introduction to R

This foundational-level, hands-on workshop introduces the basics of working with data using the R language. R provides many ways to query, explore, and visualise data, build models from data, and perform statistical tests. Because it is a programming language, R can do a greater range of things than can be done with a spreadsheet or a point-and-click statistics application.
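
For example, a minimal sketch of those four activities in R, using the built-in iris dataset as a stand-in:

    # Explore and summarise
    data(iris)
    summary(iris$Sepal.Length)

    # Visualise
    plot(Petal.Length ~ Sepal.Length, data = iris)

    # Model: a simple linear regression
    fit <- lm(Petal.Length ~ Sepal.Length, data = iris)
    summary(fit)

    # Statistical test: compare petal lengths between two species
    two <- droplevels(subset(iris, Species != "virginica"))
    t.test(Petal.Length ~ Species, data = two)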

Introduction to TensorFlow and Deep Learning

This workshop is an introduction to how deep learning works and how to create a neural network using TensorFlow v2. We start with the basics of deep learning, including what a neural network is, how information passes through the network, and how the network learns from data through the automated process of gradient descent. You will build, train, and evaluate your own network using a cloud GPU (Google Colab). We then look at image data and how to train a convolutional neural network to classify images.
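
As a preview, here is a minimal sketch of that workflow in TensorFlow v2, using the MNIST digits as a stand-in image dataset (the workshop’s own examples may differ):

    import tensorflow as tf

    # Load the MNIST digits and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0   # add a channel axis
    x_test = x_test[..., None] / 255.0

    # A small convolutional network for 10-way digit classification.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # compile() selects the gradient-descent variant (Adam here) that drives learning.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))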
