Hi, my name is Joshua Owoyemi. I work as a Research Engineer and Data Scientist. I have a background in engineering and more than 3 years of post-PhD industry experience consulting with clients and developing machine learning solutions for applications such as autonomous driving, visual inspection, human-computer interfaces, medical diagnosis, and drug discovery. I work predominantly with Python, using frameworks such as Pandas, Scikit-learn, PyTorch, and TensorFlow for various machine learning tasks. I am also proficient with Git source control, containerization with Docker, and Agile software and application development. Take a look at my profile below, and if you have any questions, you can send me an email at tjosh.owoyemi[at]gmail[dot]com. 😀
Profile Links
GitHub
Current Roles
Research Engineer at Elix Inc., Tokyo, Japan:
- Researching and developing new models for solutions in computer vision and drug discovery.
- Developing machine learning applications for specific clients' needs in areas such as object recognition, anomaly detection, disease diagnosis, de novo molecular generation, and property prediction.
Online Content Creator and Consultant:
See my LinkedIn profile for previous roles.
Selected Publications
Ayomide Owoyemi, Joshua Owoyemi, Adenekan Osiyemi, Andrew Dallas Boyd (2020). Artificial Intelligence for Healthcare in Africa. Frontiers in Digital Health, 2, 6. DOI
Joshua Owoyemi, Naoya Chiba, Koichi Hashimoto (2019). Discriminative Recognition of Point Cloud Gesture Classes Through One-Shot Learning. 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, 2019, pp. 2304-2309. (IEEE Xplore)
Joshua Owoyemi, Koichi Hashimoto (2018). Spatiotemporal Learning of Dynamic Gestures from 3D Point Cloud Data. 2018 IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia, pp. 5929-5934. (https://arxiv.org/abs/1804.08859v1)
Cherdsak Kingkan, Joshua Owoyemi, Koichi Hashimoto (2018). Point Attention Network for Gesture Recognition Using Point Cloud Data. 29th British Machine Vision Conference (BMVC), September 3-6, 2018, Newcastle, England, pp. 118.1-118.13. The first and second authors contributed equally. (pdf)
Joshua Owoyemi, Koichi Hashimoto (2017). Learning Human Motion Intention with 3D Convolutional Neural Network. 2017 IEEE International Conference on Mechatronics and Automation, August 6-9, 2017, Takamatsu, Japan, pp. 1810-1815. (IEEE Xplore)
Other Projects
- Beadaut (Web App): A professional guidance and learning platform offering programs to help entry-level professionals acquire technical skills in areas such as Data Analysis, Digital Marketing, Product Management, Media, and Software Engineering.
- Robot Control and Manipulation for Liquid Pouring (Video): This involved developing a point-cloud gesture-based robot operation interface and a manipulation strategy for a robot arm to pour liquids without spilling. Simulations with ROS and Gazebo were used to validate the pouring strategy.
- Upper Body Point Cloud Gestures Dataset (UBPG): A point-cloud-based gesture dataset captured with a Kinect camera using the Point Cloud Library. It contains 9 gesture classes plus 1 "No Gesture" class. The dataset captures dynamic gestures in a three-dimensional data representation, posing pattern-recognition challenges in both the spatial and temporal domains. It is free to use for research purposes.
Presentations and Talks
- Getting Started: Hardware and Software (Video). A presentation at the TEDx Tohoku University AI Salon, Sendai, Japan.
- Fast Motion Inference Learning with One-Shot Learning from Class Embedding. The 5th Case Western Reserve University - Tohoku University Joint Workshop, Sendai, Japan, August 2-3, 2018.
Awards
Japan Ministry of Education, Culture, Sports, Science and Technology Scholarship (2015-2019)