Internship Topics
Prof. Helmut Prendinger
Thank you for your interest in our topics!
Objective:
The objective of the internship is a tangible result, such as:
· Research results that stand a chance of publication in a major venue (journal or conference), with the intern as author or co-author
· Development or co-development of competitive software, or an intelligent system that can be demonstrated
Paper writing will be supported by Prendinger Lab.
Topic 1:
Research Area: Deep Learning
Title of Research Topic: High-Speed Object Detection and Tracking onboard a Drone
Description: We investigate methods for collision avoidance of a drone flying at high speed (>50 km/h) with other drones or even birds. Our goal is to develop a Deep Learning (DL) based component for detecting and tracking obstacles (i.e., other drones or birds) using vision sensors. The challenge is to achieve both high precision and high speed in obstacle detection and tracking. We use the AirSim simulator to generate synthetic data to train our models.
Here is an example of our current results: https://youtu.be/C22gN2WUCMk
We are working with several Deep Learning models, such as YOLOX, and Multiple Object Tracking (MOT) models, using the OpenMMLab toolbox.
We are using different kinds of input sensors:
· RGB camera
· Stereo camera
Research and Development:
· Research part:
o We train our DL models using different hyperparameter settings.
· Development part:
o We implement different DL models on NVIDIA Orin and similar hardware platforms. We use DL frameworks such as TensorFlow or PyTorch.
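The MOT models cited below (the SORT variants and ByteTrack) follow a tracking-by-detection pattern: a detector such as YOLOX emits boxes every frame, and an association step links boxes across frames. A minimal sketch of the IoU-association core follows; the box tuples are hypothetical, and real trackers add Kalman-filter motion prediction and score-aware matching on top of this:

```python
# Minimal tracking-by-detection sketch: greedy IoU association between
# track boxes from the previous frame and detections in the current frame.
# Boxes are (x1, y1, x2, y2) tuples; this omits the Kalman-filter motion
# models that SORT/ByteTrack use in practice.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_thresh=0.3):
    """Greedily match tracks to detections by descending IoU.
    Returns a list of (track_index, detection_index) pairs."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_thresh or ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches
```

For example, `associate([(0, 0, 10, 10)], [(1, 1, 11, 11)])` matches the slightly shifted box to the existing track, while a far-away detection is left unmatched and would start a new track.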
Key references:
· Hou-Ning Hu, Yung-Hsu Yang, Tobias Fischer, Trevor Darrell, Fisher Yu, Min Sun. Monocular Quasi-Dense 3D Object Tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, Issue 2, February 2023. https://arxiv.org/abs/2103.07351
· Jinkun Cao, Xinshuo Weng, Rawal Khirodkar, Jiangmiao Pang, Kris Kitani. Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking, 2022. https://arxiv.org/abs/2203.14360
· Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Fucheng Weng, Zehuan Yuan, Ping Luo, Wenyu Liu, Xinggang Wang. ByteTrack: Multi-Object Tracking by Associating Every Detection Box, 2021. https://arxiv.org/abs/2110.06864
· Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun. YOLOX: Exceeding YOLO Series in 2021, 2021. https://arxiv.org/abs/2107.08430
· Chao Qu, Ty Nguyen, Camillo J. Taylor. Depth Completion via Deep Basis Fitting, 2019. https://arxiv.org/abs/1912.10336
Topic 2:
Research Area: Machine Learning/Deep Learning/Cointegration
Title of Research Topic: Time Series Analysis for Bitcoin Market Prediction
Description: We aim to understand the predictability of cryptocurrencies, such as Bitcoin or Ethereum. Our goal is to develop Machine Learning (ML) and Deep Learning (DL) models for predicting the price of Bitcoin and other cryptocurrencies for successful swing trading.
We consider several ML/DL models, such as:
· XGBoost
· Conformal Prediction
· Momentum Transformers, including Change Point Detection
We apply methods for time series forecasting, such as:
· Cointegration, Transfer Entropy, Convergent Cross-Mapping
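As one illustration of the cointegration idea: the Engle-Granger procedure first fits an OLS "spread" between two price series; the series are cointegrated when that spread is stationary, which in practice is checked with a unit-root test (e.g. ADF) in a second step omitted here. A sketch on synthetic data, where the two series are cointegrated by construction:

```python
import numpy as np

# Step 1 of the Engle-Granger procedure: regress one price series on the
# other by ordinary least squares. If the series are cointegrated, the
# residual (the "spread") is stationary. Step 2 (a unit-root test such as
# ADF on the residual) is omitted; the series below are synthetic.

def engle_granger_spread(y, x):
    """Return (alpha, beta, residuals) of the OLS fit y = alpha + beta*x + e."""
    X = np.column_stack([np.ones_like(x), x])
    alpha, beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return alpha, beta, y - (alpha + beta * x)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1000))       # random walk (e.g. a log-price series)
y = 0.5 + 2.0 * x + rng.normal(size=1000)  # cointegrated with x by construction
alpha, beta, spread = engle_granger_spread(y, x)
```

On this synthetic pair the estimated hedge ratio `beta` comes out very close to the true value 2.0, and the spread fluctuates around zero, which is the mean-reverting signal swing-trading strategies exploit.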
We plan to use different kinds of input:
· Candlestick chart
· Indicators, such as the Relative Strength Index (RSI)
· Etc.
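The RSI indicator compares recent average gains to average losses over a lookback window. A minimal sketch using a simple average (Wilder's original formulation uses exponential smoothing instead):

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes,
    using a simple average of gains and losses. Returns a value in
    [0, 100]; above ~70 is conventionally read as overbought, below
    ~30 as oversold."""
    deltas = [b - a for a, b in zip(prices, prices[1:])][-period:]
    gains = sum(d for d in deltas if d > 0)
    losses = sum(-d for d in deltas if d < 0)
    if losses == 0:
        return 100.0            # no losses in the window: maximum RSI
    rs = gains / losses         # relative strength
    return 100.0 - 100.0 / (1.0 + rs)
```

A strictly rising series gives an RSI of 100, a strictly falling one gives 0, and a series whose gains and losses balance gives 50.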
Research and Development:
· Research part:
o We train our DL models using different hyperparameter settings.
· Development part:
o We implement different DL models.
Key references:
· Hansika Hewamalage, Klaus Ackermann, Christoph Bergmeir. Forecast Evaluation for Data Scientists: Common Pitfalls and Best Practices, 2022. https://arxiv.org/abs/2203.10716
· Kieran Wood, Stephen Roberts, Stefan Zohren. Slow Momentum with Fast Reversion: A Trading Strategy Using Deep Learning and Changepoint Detection, 2021. https://arxiv.org/abs/2105.13727
· Patrick Jaquart, David Dann, Christof Weinhardt. Short-term bitcoin market prediction via machine learning. The Journal of Finance and Data Science, Vol. 7, 2021, 45-66. https://www.sciencedirect.com/science/article/pii/S2405918821000027?via%3Dihub
· Chengyi Tu, Ying Fan, Jianing Fan. Universal Cointegration and Its Applications. iScience, Vol. 19, 2019, 986-995
Topic 3:
Research Area: Deep Learning
Title of Research Topic: Transformer-based Conditional Generative Models
Description: Recent advancements in stable diffusion models have demonstrated the potential for generating high-resolution images from text queries using latent representations [3]. Further developments have expanded this approach to incorporate multiple modality queries for improved sample conditioning [1,2].
This project aims to conduct experiments with and improve the latest conditioned generative and diffusion approaches, such as those described in [1,2]. Through an analysis of these models and their intermediate results, we aim to identify opportunities for improvement. This could include increasing the diversity of generated outputs (the mode collapse problem) or incorporating new constraints to enhance sample generation.
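For orientation, the forward (noising) half of a DDPM-style diffusion model has a closed form; the conditioning signals of [1,2] (text, layout, edge maps) only affect the learned reverse process, which is omitted here. A toy NumPy sketch with the linear beta schedule of the original DDPM paper:

```python
import numpy as np

# Forward (noising) process of a DDPM-style diffusion model in closed form:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I)
# with a linear beta schedule. By t = T-1, alpha_bar is near zero, so the
# sample is approximately standard Gaussian noise regardless of x_0.

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal-retention factor

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) at integer timestep t in one shot."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.zeros(100_000)                   # toy "image": a constant signal
x_late = q_sample(x0, T - 1, rng)        # late-timestep sample: near N(0, 1)
```

The reverse process then learns to undo these steps, and it is at that stage that a conditioning branch (a ControlNet-style adapter or GLIGEN-style gated layers) injects the query.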
Key references:
[1] ControlNet: https://arxiv.org/abs/2302.05543, https://github.com/lllyasviel/ControlNet
[2] GLIGEN: https://arxiv.org/abs/2301.07093, https://github.com/gligen/GLIGEN
[3] Stable Diffusion: https://arxiv.org/abs/2112.10752, https://github.com/CompVis/latent-diffusion, https://github.com/CompVis/stable-diffusion
[4] WaveNet: https://arxiv.org/abs/1609.03499, https://github.com/vincentherrmann/pytorch-wavenet, https://openreview.net/pdf?id=rJe4ShAcF7
[5] Attention Is All You Need: https://arxiv.org/abs/1706.03762
E-mail: helmut @ nii [Address format is: username@nii.ac.jp]
Personal Website: http://research.nii.ac.jp/~prendinger/
LOOKING FORWARD TO HEARING FROM YOU!