Robotic Vision Quarterly Newsletter - October 2016

creating robots that see

 

RoboVis 2016, our Centre's third annual conference, was held in Lorne, Victoria.


DIRECTOR'S MESSAGE

Welcome to our first public newsletter. We hope that this is an informative and enjoyable way to keep up to date with what’s happening in our Centre's quest to create robots that see.

 
We are an Australian research centre that leads the world in undertaking transformational research tackling the critical and complex challenge of applying computer vision to robotics. We believe that robotic vision is the key to unleashing the full potential of robots and fundamentally changing the way we live and work.
 
We have a unique opportunity as a Centre of Excellence, funded by the Australian Research Council, not only to conduct research in the exciting new field of robotic vision but also to build research capacity, develop the research and industry leaders of tomorrow, engage with the community, and help people learn about robotics, vision and coding. We invite you to join us as we establish a vibrant international robotic vision community in partnership with four Australian universities (QUT, ANU, University of Adelaide and Monash), CSIRO and five international organisations (Oxford University, Imperial College London, INRIA, ETH Zurich and Georgia Tech).
 
We believe that the ability to see, to visually understand the complex world around us and respond to it, is critical for the next generation of robots that will perform useful work in agriculture, environmental monitoring, healthcare, infrastructure inspection, construction, manufacturing and so on. If you have problems that robotic vision might solve, please get in touch. If you’d like to help us translate our science into impact and wealth creation, please let me know.
 
Please join us on our journey as we explore the possibilities of future technologies. I’m sure you’ve got more than enough email and things to read, so I’d like to make this an enjoyable, low-traffic experience for you. The newsletter will come out every three months. You can of course opt out, and you’re very welcome to forward it to others who might be interested so they can opt in.

If you have comments about the newsletter, questions about what we do or how to engage with us, please feel free to email me.
 

Professor Peter Corke
Centre Director

TRANSFORMING COMMUNITIES

Associate Investigator Matt Dunbabin and Research Fellow Feras Dayoub pictured with the COTSbot

CENTRE RESEARCHERS TEAM UP WITH THE GREAT BARRIER REEF FOUNDATION TO WIN THE GOOGLE IMPACT CHALLENGE AUSTRALIA'S PEOPLE'S CHOICE AWARD

 
Congratulations to Centre Researchers Dr Matthew Dunbabin and Dr Feras Dayoub. Their project with the Great Barrier Reef Foundation to create a low-cost ‘robo reef protector’ won the people’s choice award in the Google Impact Challenge Australia. The award is worth $750,000.
 
The foundation says the team will build on the researchers’ successful COTSbot platform, which was designed to tackle one of the greatest threats to the reef: the crown-of-thorns starfish (COTS). The COTSbot identifies a COTS and injects it with a solution to kill it.
 
“To be recognised in this way is pretty awesome,” said Dr Dunbabin, an Associate Investigator with the Centre. “We learnt a lot from COTSbot, what works, what doesn’t work. What we learnt will be brought into the new design.”
 
The team now wants to create the RangerBot, a low-cost, more versatile version of the COTSbot. It will do that by shrinking COTSbot, adding a suite of vision-based sensors and developing a range of attachments to tackle various monitoring and management activities along the Great Barrier Reef.
 
“This is a fantastic opportunity that opens the door to building more robots that will help protect the Reef,” said Dr Dayoub, a Research Fellow with the Centre. “This motivates us to answer the support of the people who voted for us by working very hard towards building a great robot, the RangerBot.”
 
The Google Impact Challenge Australia was created to help not-for-profit organisations develop technologies that can help tackle the world’s biggest social challenges, and the health of the Great Barrier Reef is certainly one of the most important challenges facing Australia.
 
“We wouldn’t be here without the support of the Great Barrier Reef Foundation, an organisation truly dedicated to reef conservation,” said Dr Dunbabin.
 
The funding will also allow the team to drive down the cost of building the robot, making it affordable for communities.

AGBOT II - The Robotic Weed Slayer


AGBOT II is a fully-autonomous weed-killing robot that could cut the cost of weed control by 90 per cent, potentially saving the farm sector $1.3 billion a year. Watch this video to see the robot in action. News story here
TRANSFORMING INDUSTRY

Chief Investigator Michael Milford and Research Affiliate Thierry Peynot. Image courtesy of QUT.

QUEENSLAND GOVERNMENT AWARDS MAJOR GRANT TO HELP ROBOTIC VISION RESEARCHERS WORK WITH CATERPILLAR ON AUTOMATION

 
The Queensland Government awarded a team comprising Chief Investigator Michael Milford and Research Affiliates Thierry Peynot and Ben Upcroft $428,000 in funding as part of its Advance Queensland Innovation Partnerships program, to help Caterpillar take its mining equipment and automation technology to the next level. The funding, combined with contributions from QUT, Caterpillar and Mining3, will help Milford and his team develop technologies that could ultimately enable the automation of underground mining vehicles. Right now, lasers are used in attempts to automate vehicles involved in underground mining operations. Milford and his team will instead develop a camera-based positioning system for mining vehicles to help track them in these harsh, underground environments. News story here Media release here 
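For readers curious about what a camera-based positioning system involves, below is a deliberately simplified Python sketch of the core idea: match the current camera frame against reference images captured at known locations. The file names, image sizes and matching score are illustrative assumptions only, not the team’s actual approach.

import cv2
import numpy as np

def thumbnail(path, size=(32, 24)):
    """Load an image, convert to grayscale and shrink it to a small thumbnail."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.resize(img, size).astype(np.float32)

# Reference frames captured at known positions along a tunnel (hypothetical files).
reference = {
    "junction_A": thumbnail("ref/junction_A.png"),
    "drive_12":   thumbnail("ref/drive_12.png"),
    "stope_3":    thumbnail("ref/stope_3.png"),
}

query = thumbnail("current_frame.png")   # the vehicle's current camera view

# Pick the stored location whose appearance is closest to the current view,
# using a mean absolute pixel difference as a crude similarity score.
scores = {name: float(np.abs(ref - query).mean()) for name, ref in reference.items()}
best = min(scores, key=scores.get)
print("closest stored location:", best)

Real systems must cope with dust, darkness and viewpoint change, and fuse the result with other sensing, but the principle of recognising where you are from what the camera sees is the same.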
TRANSFORMING ROBOTIC VISION
Chief Investigator Tom Drummond

NVIDIA COLLABORATION PROVIDING “GAME-CHANGING” TECHNOLOGY TO THE CENTRE

If we want robots to be able to do things in the real world, they need to be able to react to what they’re seeing in real time.

Thanks to a new collaboration with NVIDIA, our Centre now has the computing power it needs for its research to help robots learn to see. The centrepiece of this collaboration is our Monash University node in Melbourne, which NVIDIA made a GPU Research Centre this year. NVIDIA, of course, is best known for designing Graphics Processing Units (GPUs) for the gaming market, and has a significant share of the GPU market. About 10 years ago, scientists worked out that there was more computing power inside the graphics cards they were buying from NVIDIA than in the computers they were putting them into. “The cards are about 10 to 20 times faster than a whole computer,” says Chief Investigator Prof Tom Drummond (pictured). It’s that type of computing power that’s needed to help robots see and react in real time.

“With single-threaded code going from a Central Processing Unit (CPU) over to a GPU, we may see a 150-fold speedup in computing. That’s a game changer,” says Tom. “It means we can do things in robotics, which has to be real-time, that we couldn’t otherwise.”
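As a rough illustration of what that kind of offload looks like in practice, the short Python sketch below (assuming PyTorch and an NVIDIA CUDA device are available; this is not Centre code) times the same large matrix product on the CPU and then on the GPU. The measured speedup depends entirely on the hardware and problem size.

import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

t0 = time.time()
c_cpu = a @ b                      # one large matrix product on the CPU
cpu_time = time.time() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()       # make sure the transfers have finished
    t0 = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()       # wait for the kernel before stopping the clock
    gpu_time = time.time() - t0
    print("CPU: %.3fs  GPU: %.3fs  speedup: %.0fx" % (cpu_time, gpu_time, cpu_time / gpu_time))
else:
    print("CPU only: %.3fs (no CUDA device found)" % cpu_time)

Dense linear algebra like this is exactly the workload that sits inside the vision and learning algorithms the Centre needs to run in real time.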

Also helping the Centre is its affiliation with Monash University. Monash has committed to give the Centre 100,000 core hours on its new Massive-3 (M3) computer to support Centre research. That gives Centre researchers a lot of access to NVIDIA GPU computing capability.

To help the other nodes, the Centre also put in a Linkage Infrastructure, Equipment and Facilities (LIEF) scheme bid for research infrastructure funding, and the resulting funds were used to purchase a number of NVIDIA GPUs for deep learning that are now being used at The University of Adelaide and Queensland University of Technology (QUT).

In addition, the NVIDIA collaboration provides some key hardware for the Centre. For instance, there is a collaboration between Monash and QUT within the Centre to develop a Vision Operating System, which will enable large sets of robots and cameras to pool their resources to solve big problems. NVIDIA is helping make that happen with the donation of its TX1 developer boards, which have the power to process image feeds and the software tools to instantly analyse and provide context to what’s being seen.

“To ‘teach’ robots how to see and understand our world we need to write algorithms that can process huge numbers of parameters at the same time. By parameters we mean that the robots need to understand structure, so they can ‘see’ the world in three dimensions, but they also need to ‘know’ what it is they are seeing and where they are in relation to these things,” said Tom.

Finally, NVIDIA is also providing access to its education program so Centre researchers can learn how to use the hardware. That’s beginning at Monash, but is expected to move to our other nodes at QUT, Adelaide and the Australian National University (ANU).

Chief Investigator Stephen Gould

RESEARCH PROGRAM SPOTLIGHT

 
UNDERSTANDING - Semantic Representations (SR)
Semantic Representations (SR) is one of the Centre's five core research programs. The aim of the program is to help a robot understand its environment through visual perception, and to work out how to act in that environment.
 
There are several key projects that make up this program. They include teaching the robot about objects and activities through the use of images and videos. If the robot is able to give a label to an object, it’s then able to understand the “semantics” of the object. In other words, it’s able to understand not only what the object is, but what it can do.
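As a concrete, if very simplified, illustration of what “giving a label to an object” means in code, the Python sketch below (assuming the torchvision library and its ImageNet-pretrained ResNet-18 weights; the input file name is hypothetical and this is not the SR program’s software) classifies a single image into one of 1,000 object categories.

import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, crop and normalise the image.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True).eval()   # network pretrained on ImageNet

img = Image.open("example.jpg")            # hypothetical input image
x = preprocess(img).unsqueeze(0)           # add a batch dimension
with torch.no_grad():
    scores = model(x)
label_idx = scores.argmax(dim=1).item()    # index into the 1,000 ImageNet classes
print("predicted ImageNet class index:", label_idx)

A full semantic representation goes well beyond a single class label, but a reliable label like this is the starting point for reasoning about what an object is and what it can do.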

Not only is it important to recognise objects, but robots also need to understand their context (the background regions and geometry) and to be able to track these objects and their location in the environment over time. One of the projects in the program addresses this “scene understanding” problem.

Activity recognition also helps the robot understand how it’s supposed to act in a certain environment. For example, if a person is on the phone, the robot might know not to interrupt; if an elderly person is standing up from a chair, perhaps the robot can offer some assistance. If we want robots to work alongside humans, it’s also important that they understand how humans move. This type of understanding is also important for teaching the robot how to do tasks. Wouldn’t it be great if a robot could assemble an Ikea flat-pack piece of furniture? By showing the robot how a person assembles the furniture, and then showing it multiple examples of other furniture being assembled, it’s hoped the robot could then infer how to assemble a new piece of furniture from a set of abstract instructions.

The last key component to this program is teaching the robot to communicate what it sees in its environment through language. That is, we’re looking at equipping robots with the ability to explain what they see in their environment in natural language, say English.

This program couples strongly with other research programs in the Centre, including the use of machine learning algorithms and robust vision to enhance the robot’s understanding.
DEVELOPING OUR FUTURE LEADERS

Centre Researchers Markus Eich, Trung Pham, Feras Dayoub, Sareh Shirazi and Fahimeh Rezazadegan at the Leadership Training Day during RoboVis 2016. Image credit: Michael Milford


We would one day like to see companies in the fledgling robotic vision industry founded by our graduates and alumni, and we are keen to give our researchers the skills and support to enable them to do this. Our leadership training commenced for our research fellows and PhD researchers at RoboVis in September 2016. Led by qualified organisational psychologists David Whittingham and Jo Karabitsos (Evexia), the training encouraged our researchers to complete a Career Development Plan and introduced them to some of the common tools of leadership.

Reflecting on the training day, our researchers shared some of what they learned:

“Supervisors must find a balance between being encouraging with setting boundaries”

“Individual differences in communication styles to more effectively get your message across”

“It is important to always remind people of the big picture so they don’t get too caught up in detail”

Our leadership program is based on the well-tested Researcher Development Framework (RDF) developed in the U.K. by Vitae, with one important addition. The RDF considers four capability domains as important in developing the careers of junior researchers:

1.    Knowledge and intellectual abilities
2.    Personal effectiveness
3.    Research governance and organisation
4.    Engagement, influence and impact

We consider one additional sphere of capability important: 

5.    Entrepreneurship. 

In 2017 we aim to have two further Leadership training sessions for all of our junior researchers as well as training at each node. To support this we are also introducing a novel system of Centre Professional Development (CPD) points to guide everyone through the process of developing skills across a broad range of areas so they can fulfil their potential to become the research and industry leaders of tomorrow.
EDUCATION

Centre Director, Professor Peter Corke. Image credit: QUT


In 2015, Peter and the online specialists in QUT’s e-Learning Services created two massive open online courses (MOOCs) titled “Introduction to Robotics” and “Robotic Vision”.  They each ran twice and reached over 30,000 students in 150 countries.  These courses were awarded the prestigious Wharton QS Stars Reimagine Education Awards (Gold & Silver).
 
The courses, aimed at the undergraduate level, required some knowledge of linear algebra, basic control theory and programming.  They included formative and summative assessment through multiple-choice quizzes, MATLAB-based programming assignments, and a robot-building project.
We had great support from MathWorks, who provided a free MATLAB licence to all students and in-course support through a specialised MathWorks teaching team member.  Springer also made sections of the Robotics, Vision & Control textbook available for free to students and offered a discounted rate on both the electronic and hard copies for those who wished to purchase the textbook.  To deliver its MOOCs, QUT utilises the Open edX software platform through EdCast, a US-based learning systems provider.
 
The courses are free and we currently have the Robotic Vision course open for enrolment.  If you’d like to experience our internationally distinguished courses, you can enrol via this signup page.
 
This year, we have reworked parts of the Introduction to Robotics course into a number of shorter courses for release on the UK-based FutureLearn platform.  The courses will be accessible to a wider audience, with reduced technical depth owing to their shorter length; an example of this is considering robots that move in two dimensions rather than three.  The first course in the Introducing Robotics program tackles the basics of what robots are, what they are not, why we need them and the implications of robots in society.  It starts on 7th November and you can enrol via this signup page.

2016 HIGHLIGHTS

European Conference on Computer Vision (ECCV) 2016
The Centre had three great successes at ECCV, held in Amsterdam in October. ECCV is a biennial computer vision conference considered to be one of the top three in the field, along with Computer Vision and Pattern Recognition (CVPR) and the International Conference on Computer Vision (ICCV).

Centre researchers Gustavo Carneiro (CI), Hongdong Li (CI), Laurent Kneip (Associate Investigator), Peter Anderson (PhD Researcher) and Ravi Garg (University of Adelaide Research Fellow) travelled to Amsterdam and were able to celebrate these successes in person:

The prestigious Koenderink Paper Prize was awarded to Ed Rosten and CI Tom Drummond for their 2006 ECCV paper “Machine learning for high-speed corner detection”. This prize is awarded for fundamental contributions to computer vision that have stood the test of time. Google Scholar shows 2400+ citations for this work.

Gao Zhu, Associate Investigator Fatih Porikli and CI Hongdong Li won 1st place and the “Best Performing Tracker Award” at the Visual Object Tracking-TIR Challenge.
 
Partner Investigator Professor Andy Davison and his Imperial College London team, Hanme Kim and Stefan Leutenegger, won the ECCV 2016 Best Paper Award for the paper “Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera”.
Amazon Picking Challenge
The Centre was one of 16 teams selected from 30 applicants to take part in the 2016 Amazon Picking Challenge, and came in 6th place. The event was held in conjunction with RoboCup 2016 in Leipzig, Germany.  The challenge was to robotically and autonomously pick objects from a shelf into a box, and from a box back onto the shelf. Six of the 19-person team of Centre researchers working on the challenge (Research Fellows Juxi Leitner and Niko Suenderhauf, PhD Researcher Adam Tow, and Undergraduate Students Matthew Cooper, Jake Dean and Lachlan Nicholson) flew to Germany to compete, and our result was due to a lot of hard work and long hours. The 2017 dates have just been announced by Amazon Robotics: the challenge will now be called the Amazon Robotics Challenge and will be held at RoboCup in Nagoya, Japan at the end of July 2017. We look forward to entering again next year.
 
Eureka Award Finalists
The Centre had two finalists in the 2016 Australian Museum Eureka Prizes. Centre Chief Investigator Michael Milford was a finalist for the Eureka Prize for Outstanding Early Career Researcher. Our COTSbot team of Matt Dunbabin, Feras Dayoub, and Peter Corke were finalists for the Eureka Prize for Environmental Research.

The Eureka Prizes are Australia’s most prestigious science awards, and celebrate excellence in several areas, including research and innovation. Media release here

Our Centre Researchers share their experiences from the 2016 Amazon Picking Challenge in this video
QUT Environmental Robotics, finalist 2016 Eureka Prize for Environmental Research
Associate Investigator Matt Dunbabin talks about the COTSbot protecting the reef in this Eureka Award finalist video
Chief Investigator Michael Milford talks about his research bridging the divide between robotics and neuroscience in this Eureka Award finalist video
Robotics: Science and Systems (RSS) Workshop
 
Centre Research Fellows Niko Suenderhauf and Jurgen (Juxi) Leitner ran a highly successful workshop at the Robotics: Science and Systems (RSS) Conference, held at the University of Michigan. The workshop, titled "Are the Skeptics Right? Limits and Potentials of Deep Learning in Robotics", attracted a crowd of nearly 200 and received a great response on social media. You can view the workshop introduction and talks here.
 
Robotic Vision Summer School (RVSS)
Our second summer school was held in April, again at ANU's beautiful Kioloa Campus. The week-long event included technical sessions and workshops, and we were very fortunate to have PI Paul Newman from the Mobile Robotics Group, Oxford, PI Frank Dellaert from Georgia Tech, Stefan Williams from the Australian Centre for Field Robotics, University of Sydney, and Jana Kosecka from George Mason University as guest speakers.
RoboVis 2016


Pictured L to R: Niko Suenderhauf, Alex Martin, Bohan Zhuang, Zhibin Liao, Viorela Ila and Trung Pham.
Our third annual Centre conference, RoboVis, was held in Lorne, Victoria at the end of September. We were fortunate to have members of our Centre Advisory Committee and End-User Advisory Board join us. The conference included technical talks spanning the range of our research programs, as well as the ever-popular demo session where we saw some Centre technologies in action. Innovations this year included a three-minute thesis competition, where PhD researchers gave pitches on how their research would change the world, and a Centre awards ceremony. The awards recognised exceptional performance in raising our Centre’s profile and in collaborative endeavours. Photos from RoboVis can be viewed on Flickr.
 
Copyright © 2016  Australian Centre for Robotic Vision. All rights reserved.
As you are associated with the Australian Centre for Robotic Vision, we have added you to our mailing list.

Our mailing address is:

Centre Headquarters
Australian Centre for Robotic Vision
S Block Level 11 Room 1105 
QUT Gardens Point Campus
2 George Street
Brisbane QLD 4001
Australia

Add us to your address book

You can update your preferences or unsubscribe from this list

 





