Research Scientist Intern, Vision Language Model (PhD)
Job Posted: 1/20/2025
Meta Platforms, Inc.
Redmond, WA 98053
United States
Category: Other
Job Description
Reality Labs Research is looking for an intern to help us develop the next generation of assistance systems that guide users in contextual and adaptive future AR/VR systems. In particular, we are seeking candidates who have experience with any of the following: vision-language models, LLM interpretability, or multimodal LLMs. You will work with researchers to help enable their work across the following research disciplines:

- Improving the performance of VLMs in product-related scenarios
- Building white-box mechanisms to better evaluate the capabilities of VLMs

Our internships are twelve (12) to twenty-four (24) weeks long, and we have various start dates throughout the year. Some projects may require a minimum of 16 consecutive weeks.

Qualifications:
- Currently has or is in the process of pursuing a PhD in machine learning, computer vision, speech processing, applied statistics, computational neuroscience, or a relevant technical field.
- Excellent research skills, including defining problems, exploring solutions, and analyzing and presenting results.
- Proficiency in Python and machine learning libraries (PyTorch, NumPy, scikit-learn, SciPy, pandas, Matplotlib, etc.).
- Deep understanding of vision-language models, supported by quality first-authored publications in related domains.
- Interpersonal skills: cross-group and cross-culture collaboration.
- Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.
- Proven track record of achieving significant results, as demonstrated by grants, fellowships, patents, and first-authored publications at leading workshops or conferences such as NeurIPS, ICML, ICLR, CHI, UIST, IMWUT, CVPR, ICCV, ECCV, AAAI, ICRA, SIGGRAPH, ETRA, or similar.
- Experience with VLM/LLM training and fine-tuning.
- Experience solving traditional CV problems, including but not limited to hand/body pose estimation, object detection, image classification/segmentation, and image/video understanding.
- Experience working and communicating cross-functionally in a team environment.
- Intent to return to the degree program after completion of the internship/co-op.
- Availability for a minimum 16-consecutive-week internship.