Category: Talk
-

Many posters, demos, and awards at the NELMS IoT conference
—
Congratulations to the many students from the TEA lab on presenting their work and receiving recognition!
-

Sam, Yuang, Zhenjiang present posters at UF AI Days 2024
—
On October 29, 2024, the three students presented posters on the following papers: Zhenjiang Mao, Dong-You Jhong, Ao Wang, Ivan Ruchkin. Language-Enhanced Latent Representations for Out-of-Distribution Detection in Autonomous Driving [ArXiv] [Slides]. Robot Trust for Symbiotic Societies (RTSS) Workshop (co-located with ICRA 2024), Yokohama, Japan, 2024. Zhenjiang Mao, Siqi Dai, Yuang Geng, Ivan Ruchkin. Zero-shot Safety Prediction…
-

Ivan presents calibrated visual safety prediction at TACPS workshop at ESWEEK
—
Ivan gave an invited talk “How Safe Will I Be Given What I See? Calibrated Visual Safety Chance Prediction with (Foundation) World Models”. The discussion was very active and generated sufficient questions for the rest of Zhenjiang’s PhD. Relevant links: Talk page, Workshop page, Conference page, Slides. Talk abstract: In safety-critical autonomous systems, safety prediction…
-

Yuang presents high-dimensional reachability at FM 2024
—
Yuang Geng presented his work on reachability for vision-based neural-network controllers at the 26th International Symposium on Formal Methods (FM). Reportedly, the attendees were curious about the mapping between states and images. Citation and further materials:
-

Zhenjiang presents calibrated safety predictors at L4DC 2024
—
Zhenjiang Mao presented his work on learning-enabled safety prediction (poster, paper) at the 6th Annual Conference on Learning for Decision and Control (L4DC 2024) in Oxford, UK. Reportedly, the attendees liked math more than he does. Citation: Zhenjiang Mao, Carson Sobolewski, Ivan Ruchkin. How Safe Am I Given What I See? Calibrated Prediction of Safety…
-

Zhenjiang presents two papers and a poster at ICRA 2024
—
The papers were on foundation world models and language-enhanced OOD detection. The audience response was, reportedly, positive and encouraged implementation on physical robotic systems. Citations:
-

Ivan presents NN repair with preservation at ICCPS 2024
—
In the first presentation of ICCPS 2024, Ivan showcased a method to repair a neural network controller while preserving its verification results. Citation: Pengyuan Lu, Matthew Cleaveland, Oleg Sokolsky, Insup Lee, Ivan Ruchkin. Repairing Learning-Enabled Controllers While Preserving What Works [ArXiv] [GitHub] [Slides]. In Proceedings of the International Conference on Cyber-Physical Systems (ICCPS), Hong Kong, China, 2024.
-

Invited talk at the DACPS workshop & ETH Autonomy Talks
—
Update 1: an extended version of this talk was given at a UF MAE Affiliate Seminar. The recording can be found here (UF login required). Update 2: another version of this talk was given at the ETH Autonomy Talks (video). Update 3: yet another version of this talk was given as a CNEL Seminar. The…
-

Causal NN controller repair presented at ICAA’23
—
Shown above is a 5-step workflow of our causal repair: (1) Extract the behaviors of a learning component as an I/O table. (2) Encode the dependency of the desired property outcome on the I/O behaviors with a Halpern-Pearl model. (3) Search for a counterfactual model value assignment, revealing an actual cause and a repair. (4)…
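To give a flavor of steps (1)–(3), here is a minimal, purely illustrative sketch: a toy I/O table, a toy safety property, and a brute-force counterfactual search for a single-entry change that makes the property hold. All names, the example property, and the table values are hypothetical; the actual method uses Halpern-Pearl causal models, not this brute-force loop.

```python
from itertools import product

def property_holds(io_table):
    # Toy safety property (an assumption for this sketch): the controller
    # must brake (output 0) whenever the obstacle is close (input < 2).
    return all(out == 0 for inp, out in io_table.items() if inp < 2)

def find_repair(io_table, outputs=(0, 1)):
    """Search for a single-entry counterfactual change that makes the
    property hold; the changed entry plays the role of an actual cause."""
    if property_holds(io_table):
        return None  # nothing to repair
    for inp, alt in product(io_table, outputs):
        if alt == io_table[inp]:
            continue  # not a counterfactual change
        candidate = {**io_table, inp: alt}
        if property_holds(candidate):
            return inp, alt  # cause: entry `inp`; repair: set it to `alt`
    return None

# Usage: input 1 is "close" but the table says "don't brake" -> unsafe.
table = {0: 0, 1: 1, 2: 1, 3: 1}
print(find_repair(table))  # -> (1, 0): flipping entry 1 to "brake" repairs it
```

The counterfactual question ("would the property hold if this entry were different?") is what the Halpern-Pearl encoding in step (2) answers symbolically rather than by enumeration.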
-

Conservative safety monitoring presented at NFM’23
—
Shown above is our conservative monitoring approach that leverages probabilistic reachability offline and combines it with calibrated state estimation. Citation: Matthew Cleaveland, Oleg Sokolsky, Insup Lee, Ivan Ruchkin. Conservative Safety Monitors of Stochastic Dynamical Systems [ArXiv] [Springer] [Slides]. In Proceedings of the NASA Formal Methods Symposium (NFM), Houston, TX, 2023.
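The combination of offline reachability and calibrated online estimation can be sketched in a few lines. This is a hypothetical toy version, not the paper's method: the reach probabilities are stand-in values over discretized state cells, and the calibrated estimator is assumed to return an interval of cells containing the true state; conservatism comes from taking the worst case over that interval.

```python
def offline_reach_prob(cell):
    # Stand-in for offline probabilistic reachability (assumed values):
    # probability of reaching the unsafe set from discretized state `cell`.
    table = {0: 0.01, 1: 0.05, 2: 0.40, 3: 0.90}
    return table[cell]

def monitor(state_interval, threshold=0.2):
    """Conservative verdict: alarm if ANY state consistent with the
    calibrated estimate could exceed the risk threshold."""
    lo, hi = state_interval
    worst = max(offline_reach_prob(c) for c in range(lo, hi + 1))
    return "ALARM" if worst > threshold else "SAFE"

print(monitor((0, 1)))  # -> SAFE: estimate confined to low-risk cells
print(monitor((1, 2)))  # -> ALARM: interval includes a high-risk cell
```

Taking the maximum over the calibrated interval is what makes the monitor err on the side of caution: it never reports SAFE unless every state the estimator could be confusing with the true one is low-risk.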