The University of Texas at Austin and tech giant Cisco struck a research agreement that will begin with an emphasis on artificial intelligence and machine learning before branching out into additional technologies.

As part of the five-year partnership, Cisco Research will provide funding and expertise for four AI/ML research projects and one cybersecurity research project over the next year. Researchers from the Cockrell School of Engineering and the College of Natural Sciences will delve into several areas of AI and ML, including the Internet of Things, computer vision, the training of learning networks and more.

The goal is for Cisco and UT Austin to add more projects to the alliance each year. Other areas of emphasis under consideration include natural language processing, augmented/virtual reality, cybersecurity, edge computing and more.

“We work closely with our strategic research partners to understand their unmet needs and we identify mutual areas primed for foundational and applied research collaborations,” said John Ekerdt, associate dean for research at the Cockrell School and a professor in the McKetta Department of Chemical Engineering. “We are excited to welcome Cisco as our latest strategic research partner to UT and the Cockrell School of Engineering.”

“Cisco Research conducts and fosters cutting-edge research in areas of strategic interest to Cisco. Since 2020, we have started partnering with several top universities with broad research strengths to fund and work with faculty on interesting projects that can make a material impact to Cisco’s future innovations. As part of this initiative, we are proud to have entered this partnership with UT Austin, leveraging which we are hoping to fund several research projects over the next few years,” said Dr. Ramana Kompella, head of Cisco Research.

Here is a look at the first five projects included in the new partnership:

Radu Marculescu, a professor in the Department of Electrical and Computer Engineering, will develop a new approach to handling issues of data, latency and power usage in large networks of many different types of devices using a federated learning algorithm. Federated learning allows many devices on a network to collaboratively train shared models while keeping all the data confined to the individual devices. The research aims to improve the scalability and performance of federated learning, especially for large, dynamic networks.
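To make the idea concrete, here is a minimal sketch of the federated-averaging scheme that approaches like this build on, written around an illustrative least-squares model; the data, model and function names are assumptions for the example, not details of Marculescu’s project.

```python
import numpy as np

# Illustrative federated-averaging (FedAvg) round; not the project's code.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training (plain least-squares gradient steps).
    The raw data (X, y) never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server round: each device trains locally, then the server averages
    the returned weights, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy network of three devices, each holding private data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2, -1] without pooling any raw data
```

Only model weights cross the network; the scalability questions the project targets appear when a handful of toy clients become a large, changing fleet of heterogeneous devices.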

Aditya Akella, professor of computer science in the College of Natural Sciences, is tackling challenges that come with large-scale AI networks in multi-tenant settings, where single copies (instances) of software applications serve many customers. The size of these models can create network bottlenecks that slow training and limit scaling. Akella plans to develop a “communication substrate” that eliminates these bottlenecks and ensures information is transferred quickly enough to support the growth of these learning models.
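The bottleneck itself is easy to see in miniature. Below is a rough back-of-the-envelope sketch assuming a simple synchronous data-parallel setup; the worker count, model size and cost formulas are illustrative, not Akella’s actual substrate.

```python
import numpy as np

# Back-of-the-envelope view of the per-step communication bill in a
# synchronous data-parallel setup; numbers are illustrative only.

NUM_WORKERS = 4
MODEL_SIZE = 1_000_000  # parameters; production models are far larger

rng = np.random.default_rng(0)
local_grads = [rng.normal(size=MODEL_SIZE) for _ in range(NUM_WORKERS)]

def naive_exchange_cost(grads):
    """Averaged gradient, plus the traffic incurred if every worker shipped
    its full gradient to every other worker: workers * (workers-1) * size."""
    avg = np.mean(grads, axis=0)
    return avg, len(grads) * (len(grads) - 1) * grads[0].nbytes

def ring_allreduce_cost(grads):
    """Averaged gradient, plus the traffic a ring all-reduce would move:
    roughly 2 * (workers - 1) * model size in total."""
    avg = np.mean(grads, axis=0)
    return avg, 2 * (len(grads) - 1) * grads[0].nbytes

_, naive_bytes = naive_exchange_cost(local_grads)
_, ring_bytes = ring_allreduce_cost(local_grads)
print(f"all-to-all exchange: {naive_bytes / 1e6:.0f} MB per training step")
print(f"ring all-reduce:     {ring_bytes / 1e6:.0f} MB per training step")
```

The point is that every training step pays a communication bill that grows with model and cluster size, which is the cost a better substrate would shrink.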

Electrical and computer engineering assistant professor Sandeep Chinchali is developing a learning platform for Internet of Things devices that improves their computer vision capabilities by pulling training data from their own video streams. The aim is to build new machine learning algorithms that can automatically pick out the most valuable training data from these IoT devices, while balancing accuracy against the network bandwidth the process consumes.
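As a rough illustration of that trade-off, here is a sketch of one common way to score and select frames, using prediction entropy and a fixed upload budget; the scoring rule and numbers are assumptions for the example, not the project’s algorithm.

```python
import numpy as np

# Illustrative frame selection from an IoT camera stream; the entropy
# score and budget are stand-ins, not the project's actual algorithm.

def frame_value(pred_probs):
    """Score each frame by the on-device model's prediction entropy:
    frames the model is unsure about are the most useful for retraining."""
    p = np.clip(pred_probs, 1e-9, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def select_frames(pred_probs, frame_bytes, budget_bytes):
    """Greedily pick the highest-value frames until the upload budget is
    spent, trading expected accuracy gains against bandwidth cost."""
    order = np.argsort(-frame_value(pred_probs))
    chosen, used = [], 0
    for idx in order:
        if used + frame_bytes[idx] <= budget_bytes:
            chosen.append(int(idx))
            used += int(frame_bytes[idx])
    return chosen, used

# Toy stream: 100 frames, each with softmax outputs over 5 classes.
rng = np.random.default_rng(1)
logits = rng.normal(size=(100, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
sizes = rng.integers(40_000, 120_000, size=100)   # bytes per frame

picked, used = select_frames(probs, sizes, budget_bytes=1_000_000)
print(f"uploading {len(picked)} of 100 frames ({used / 1e6:.2f} MB)")
```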

Computer science professor Hovav Shacham is zeroing in on a security vulnerability that comes with using third-party libraries to add features to a software system. The key to rectifying it is restricting the memory the library can read and write. WebAssembly, a type of code that runs in modern web browsers, is a strong candidate for overcoming this vulnerability, according to the researchers, because code running inside it is already restricted in the memory it can access.
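The isolation idea can be illustrated without any WebAssembly tooling at all. The toy Python below mimics the core property, a library confined to a bounded “linear memory” whose every access is bounds-checked; it is a conceptual sketch, not how the actual sandboxing is implemented.

```python
# Conceptual illustration only (no real WebAssembly tooling): the host
# hands an untrusted library a bounded "linear memory", and every access
# is bounds-checked, so the library cannot read or write host memory.

class LinearMemory:
    """The only memory the sandboxed library may touch, mimicking the
    linear memory a Wasm module is confined to."""
    def __init__(self, size):
        self._buf = bytearray(size)

    def read(self, offset, length):
        if offset < 0 or offset + length > len(self._buf):
            raise MemoryError("out-of-bounds read blocked")
        return bytes(self._buf[offset:offset + length])

    def write(self, offset, data):
        if offset < 0 or offset + len(data) > len(self._buf):
            raise MemoryError("out-of-bounds write blocked")
        self._buf[offset:offset + len(data)] = data

def untrusted_image_filter(mem, offset, length):
    """Stand-in for a third-party library: it can only operate on bytes
    the host explicitly placed inside its sandboxed memory."""
    pixels = bytearray(mem.read(offset, length))
    for i in range(len(pixels)):
        pixels[i] = 255 - pixels[i]   # invert, entirely inside the sandbox
    mem.write(offset, bytes(pixels))

mem = LinearMemory(1024)
mem.write(0, bytes([10, 20, 30, 40]))
untrusted_image_filter(mem, 0, 4)
print(mem.read(0, 4))                 # b'\xf5\xeb\xe1\xd7'
# Reaching outside the region raises instead of corrupting the host, e.g.:
# untrusted_image_filter(mem, 1020, 16)  -> MemoryError
```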

Zhangyang “Atlas” Wang, an electrical and computer engineering assistant professor, is examining smaller subnetworks within larger learning models that require significant resources to run. These subnetworks require fewer resources, and Wang’s research has found that they can perform as well as the larger networks. Essentially, researchers could have used smaller subnetworks from the very beginning, but they didn’t know which ones to choose. Wang’s research will focus on finding the right subnetwork for different tasks.
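A rough sketch of the underlying idea follows, using simple magnitude pruning to expose one candidate subnetwork; the layer size and pruning rule are illustrative assumptions, not Wang’s actual method.

```python
import numpy as np

# Illustrative magnitude pruning to expose a sparse subnetwork; the layer
# size and pruning rule are stand-ins, not the project's actual method.

def magnitude_mask(weights, keep_fraction):
    """Keep only the largest-magnitude weights; the surviving sparse
    subnetwork is a candidate that may train as well as the full layer."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
dense = rng.normal(size=(256, 256))    # one layer of a much larger model

mask = magnitude_mask(dense, keep_fraction=0.1)
sparse = dense * mask

kept = int(mask.sum())
print(f"subnetwork keeps {kept} of {dense.size} weights "
      f"({kept / dense.size:.0%} of the layer)")
# The open question the research targets: which mask to pick for a given
# task, ideally before paying for full training of the dense model.
```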