
Korean LLM Quality Analyst - USA or South Korea (Remote)

6 hours ago | Korea, Republic of / United States | $15/hour | Freelance | <3 years of experience | Turing
Communication


About Turing:

Based in San Francisco, California, Turing is the world's leading research accelerator for frontier AI labs and a trusted partner for global enterprises deploying advanced AI systems. Turing supports customers in two ways: first, by accelerating frontier research with high-quality data, advanced training pipelines, and top AI researchers who specialize in coding, reasoning, STEM, multilinguality, multimodality, and agents; and second, by applying that expertise to help enterprises transform AI from proof of concept into proprietary intelligence, with systems that perform reliably, deliver measurable impact, and drive lasting results on the P&L.

Role Overview:

As an AI Quality Analyst, you will evaluate a new personalization feature for Gemini. You will assess how well the model uses information from your past Gemini conversations, Gmail, Google Search, and YouTube activity to make responses more relevant and helpful. This role requires a unique blend of creativity and analytical rigor. You will actively design prompts from the perspective of your own personal experiences. You will then use your analytical skills to assess the quality of the model's personalized responses, evaluating dimensions like Grounding, Integration, and Helpfulness.

Key Qualifications

  • Korean Proficiency: Ability to read and write Korean with a high degree of comprehension, as Korean is the focus language for this project.
  • Schedule Flexibility: Full-time availability in your local time zone is required. We are staffing a global, 24-hour operations team.
  • Exceptional Analytical Thinking: Demonstrate ability to evaluate nuanced and ambiguous AI responses, specifically assessing personalization quality.
  • Creative Prompt Engineering: Experience in designing creative, multi-turn starting prompts based on personal context to thoroughly test the model's capabilities.
  • Strong Evaluation Acumen: Understanding of personalization concepts, including the ability to identify incorrect personalization, poor inferences, and forced connections.
  • Meticulous Attention to Detail: The ability to review Side-by-Side (SxS) model responses and spot subtle differences in naturalness and overnarrating.
  • Excellent Written Communication: Superior ability to write clear, concise, and structured rationales for model rankings, explicitly referencing specific turn numbers.
  • Feedback: Ability to provide constructive feedback and detailed annotations.
  • Communication: Excellent communication and collaboration skills.
  • Independence: Self-motivated and able to work independently in a remote setting.
  • Technical Setup: Desktop/Laptop set up with a good internet connection.

Description:

In this role, you will be part of a dynamic team focused on evaluating the quality of personalized AI interactions. Your day-to-day work will involve:
  • Designing and executing multi-turn conversational prompts (typically 1-5 turns) that require the AI to utilize your personal information and experiences.
  • Evaluating model responses based on your intent from the starting prompt, checking if the personalization was appropriately applied.
  • Analyzing responses for Grounding issues, ensuring claims about you are supported by evidence and not flawed inferences or hallucinations.
  • Assessing Integration quality to ensure personal data is woven naturally into the response without robotic "overnarrating".
  • Rigorously evaluating and stack-ranking two model responses side-by-side (SxS) to determine which is overall more helpful, easy to use, and enjoyable.
  • Writing clear, defensible rationales for your comparisons, explicitly referencing where issues or positive aspects occurred in the conversation.
  • Extracting and verifying "Debug Info" from the model to confirm that chat summaries and data sources were properly utilized.
  • Maintaining strict data hygiene by deleting evaluation conversations to prevent them from polluting your future chat history.

Education & Experience

  • BS/BA degree or equivalent experience in a relevant field (e.g., Policy, Law, Ethics, Linguistics, Journalism, Computer Science, or a related analytical field).
  • Experience in data annotation, AI quality evaluation, content moderation, or a related role is strongly preferred.

Offer Details:

  • Commitment required: at least 4 hours per day and a minimum of 30 hours per week, with 4 hours of overlap with PST. (Two time-commitment options are available: 30 hrs/week or 40 hrs/week.)
  • Engagement type: Contractor

Evaluation Process:

  • Shortlisted candidates will be sent a Job Interest Form.
  • After the profile review, an assessment will be shared, which must be completed within 24 hours.
  • Based on the assessment outcomes, shortlisted candidates will be contacted to discuss the pre-onboarding requirements.
