chore: update translations (#4188)

* translate more l1s

* Update translations for multiple languages (ar, bn, he, hi, id, ko, pt, ru, th, tr, zh)

* partial translations

* Add translations for Irish, Galician, Hungarian, Lithuanian, Slovenian, and Telugu

- Complete translations for 6 additional languages using OpenAI translation script
- Irish (ga): 1,612 translations added
- Galician (gl): 1,614 translations added
- Hungarian (hu): 1,615 translations added
- Lithuanian (lt): 1,927 translations added
- Slovenian (sl): 2,288 translations added
- Telugu (te): 2,388 translations added

These additions bring the total completed languages to 29 out of 47 (62% completion rate)

* Add translations for Estonian, Belarusian, and Greek

- Estonian (et): 164 translations added
- Belarusian (be): 2,392 translations added
- Greek (el): 2,342 translations added

These additions bring the total completed languages to 32 out of 47 (68% completion rate)

* Add Hebrew translations

- Hebrew (he): 2,143 translations added

This brings the total completed languages to 33 out of 47 (70% completion rate)

* Add Arabic and Bengali translations

- Arabic (ar): 1,692 translations added
- Bengali (bn): 2,388 translations added

Total: 35 out of 47 languages complete (74% completion rate)

* Add Interlingua and Interlingue translations

- Interlingua (ia): 2,378 translations added
- Interlingue (ie): 2,149 translations added

Total: 37 out of 47 languages complete (79% completion rate)

* Add Georgian translations

* Add Esperanto translations

* Add Turkish translations

* Add Persian translations

* Add Romanian translations

* Improve translation script error handling

- Add JSON parsing error handling with retry logic
- Use simpler prompts on retry attempts
- Clean up markdown formatting from responses
- Skip failed chunks gracefully instead of crashing
- Successfully handle previously failing languages
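
The cleanup-and-retry behavior described in the bullets above can be sketched roughly as follows. This is a minimal standalone sketch: the function names `clean_llm_json` and `parse_with_retries` are illustrative, and the actual script additionally feeds the parse error back to the model to request a corrected completion before retrying.

```python
import json


def clean_llm_json(response: str) -> str:
    """Strip markdown code fences that commonly break json.loads()."""
    cleaned = response.strip()
    if cleaned.startswith("```json"):
        cleaned = cleaned[7:]
    if cleaned.endswith("```"):
        cleaned = cleaned[:-3]
    return cleaned.strip()


def parse_with_retries(response: str, max_retries: int = 3):
    """Parse an LLM response as JSON, retrying before giving up.

    Returns None on failure so the caller can skip the failed chunk
    gracefully instead of crashing.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return json.loads(clean_llm_json(response))
        except json.JSONDecodeError as exc:
            print(f"JSON parsing error (attempt {attempt}/{max_retries}): {exc}")
            # The real script also appends the error to the chat history
            # here and retries against a fresh model completion.
    return None
```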

* Update Georgian and add Romanian translations

* Add Serbian, Latvian, Slovak, Tamil and Basque translations

Successfully completed:
- Serbian (sr): 2062 translations
- Latvian (lv): 1614 translations
- Slovak (sk): 2158 translations
- Tamil (ta): 1696 translations
- Basque (eu): 1615 translations

Script improvements:
- Added metadata reconciliation error handling
- Successfully handles JSON parsing errors with retry logic
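
The metadata reconciliation fix amounts to tolerating keys that a failed chunk never produced. A minimal sketch under that assumption (the helper below is illustrative and simplified, not the script's exact `reconcile_metadata` signature):

```python
def reconcile_metadata(translations: dict, translation_keys: list) -> dict:
    """Attach @key placeholder metadata, skipping untranslated keys."""
    for key in translation_keys:
        # Skip keys that weren't successfully translated (e.g. from a
        # chunk whose JSON never parsed) instead of raising KeyError.
        if key not in translations:
            continue
        meta_key = f"@{key}"
        # Keep any existing metadata; create an empty stub otherwise.
        translations.setdefault(meta_key, {})
    return translations
```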

* fix needed-translations generation script

* feat: translate missing keys for 49 languages and improve translation script

- Successfully translated 12,000+ keys across 49 languages (98% completion)
- Enhanced JSON error handling in translate script to recover from parsing errors
- Fixed metadata type issues for unreadChats placeholder in fil, pt_PT, and yue locales
- Added comprehensive run_all_translations.py script for batch translation
- Resolved duplicate yue locale conflicts
- Only Tibetan (bo) remains with 40 keys due to complex character encoding issues

Languages completed:
- Vietnamese, Portuguese (BR/PT), Romanian, Russian, Slovak, Slovenian
- Serbian, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, Cantonese
- Chinese (Simplified/Traditional), and 34 other languages with 17 keys each

* fix non-compiling error

* catch up with needed translations

Commit 16be5684f9 (parent f12bcfd7e5), authored by Wilson on 2025-10-17 00:53:01 +10:00 and committed by GitHub
GPG key ID: B5690EEEBB952194 (no known key found for this signature in the database)
57 changed files with 584,970 additions and 79,877 deletions

File diff suppressed because it is too large

lib/l10n/intl_yue.arb (new file, 12,916 lines)


@@ -1,12 +1,10 @@
 import 'dart:convert';
-import 'package:flutter/services.dart';
-import 'package:flutter_dotenv/flutter_dotenv.dart';
-import 'package:get_storage/get_storage.dart';
 import 'package:fluffychat/pangea/common/constants/local.key.dart';
 import 'package:fluffychat/pangea/common/utils/error_handler.dart';
 import 'package:flutter/services.dart';
 import 'package:flutter_dotenv/flutter_dotenv.dart';
 import 'package:get_storage/get_storage.dart';

 class Environment {
   static bool get itIsTime =>

scripts/run_all_translations.py (new executable file, 181 lines)

@@ -0,0 +1,181 @@
#!/usr/bin/env python3
"""
Script to run translations for all languages that need translation keys.
This script reads needed-translations.txt and runs the translate.py script for each language.
"""
import json
import subprocess
import sys
from pathlib import Path


def load_language_mappings():
    """Load language code to display name mappings."""
    # First, load from languages.json
    languages_file = Path("languages.json")
    if not languages_file.exists():
        print("Error: languages.json not found")
        sys.exit(1)
    with open(languages_file, "r", encoding="utf-8") as f:
        languages = json.load(f)

    # Create mapping from languages.json
    lang_mapping = {}
    for lang in languages:
        lang_mapping[lang["language_code"]] = lang["language_name"]

    # Add manual mappings for codes not in languages.json
    manual_mappings = {
        "bo": "Tibetan",
        "ia": "Interlingua",
        "ie": "Interlingue",
        "pt_BR": "Portuguese (Brazil)",
        "pt_PT": "Portuguese (Portugal)",
        "zh_Hant": "Chinese (Traditional)",
    }
    lang_mapping.update(manual_mappings)
    return lang_mapping


def load_needed_translations():
    """Load the languages that need translation."""
    needed_file = Path("../needed-translations.txt")
    if not needed_file.exists():
        print("Error: needed-translations.txt not found")
        print("Please run 'flutter gen-l10n' first to generate this file")
        sys.exit(1)
    with open(needed_file, "r", encoding="utf-8") as f:
        return json.load(f)


def run_translation(lang_code, lang_name):
    """Run translation for a specific language."""
    print(f"\n{'='*60}")
    print(f"Translating {lang_code}: {lang_name}")
    print(f"{'='*60}")
    try:
        # Run the translation script using .venv
        cmd = [
            "../.venv/bin/python",
            "translate.py",
            "--lang",
            lang_code,
            "--lang-display-name",
            lang_name,
            "--mode",
            "append",
        ]
        print(f"Running: {' '.join(cmd)}")
        # Pass environment variables including OPENAI_API_KEY
        import os

        env = os.environ.copy()
        subprocess.run(cmd, check=True, capture_output=False, text=True, env=env)
        print(f"✅ Successfully translated {lang_code}")
        return True
    except subprocess.CalledProcessError as e:
        print(f"❌ Failed to translate {lang_code}: {e}")
        return False
    except FileNotFoundError as e:
        print(f"❌ Error translating {lang_code}: {e}")
        return False


def verify_translations():
    """Verify translations by running flutter gen-l10n and checking needed-translations.txt."""
    print(f"\n{'='*60}")
    print("Verifying translations by regenerating needed-translations.txt")
    print(f"{'='*60}")
    try:
        # Run flutter gen-l10n to regenerate needed-translations.txt
        subprocess.run(
            ["flutter", "gen-l10n"], check=True, capture_output=True, text=True
        )
        print("✅ Successfully ran flutter gen-l10n")
        # Check the updated needed-translations.txt
        with open("needed-translations.txt", "r", encoding="utf-8") as f:
            updated_needed = json.load(f)
        remaining_langs = list(updated_needed.keys())
        if remaining_langs:
            print(f"⚠️ Languages still needing translation: {remaining_langs}")
            for lang in remaining_langs:
                count = len(updated_needed[lang])
                print(f"  - {lang}: {count} keys remaining")
        else:
            print("🎉 All languages have been translated!")
    except subprocess.CalledProcessError as e:
        print(f"❌ Failed to run flutter gen-l10n: {e}")
        if e.stdout:
            print("STDOUT:", e.stdout)
        if e.stderr:
            print("STDERR:", e.stderr)


def main():
    """Main function to run translations for all needed languages."""
    print("Starting translation process for all languages...")
    # Change to the project directory
    project_dir = Path(__file__).parent
    import os

    os.chdir(project_dir)

    # Load mappings and needed translations
    lang_mapping = load_language_mappings()
    needed_translations = load_needed_translations()
    needed_langs = list(needed_translations.keys())
    print(f"\nFound {len(needed_langs)} languages needing translation:")
    for lang_code in sorted(needed_langs):
        lang_name = lang_mapping.get(lang_code, f"Unknown ({lang_code})")
        key_count = len(needed_translations[lang_code])
        print(f"  - {lang_code}: {lang_name} ({key_count} keys)")

    # Ask for confirmation
    response = input(
        f"\nProceed with translating all {len(needed_langs)} languages? (y/N): "
    )
    if response.lower() not in ["y", "yes"]:
        print("Translation cancelled.")
        sys.exit(0)

    # Run translations
    successful = 0
    failed = 0
    for i, lang_code in enumerate(sorted(needed_langs), 1):
        lang_name = lang_mapping.get(lang_code, f"Unknown ({lang_code})")
        print(f"\n[{i}/{len(needed_langs)}] Processing {lang_code}...")
        if run_translation(lang_code, lang_name):
            successful += 1
        else:
            failed += 1

    # Summary
    print(f"\n{'='*60}")
    print("TRANSLATION SUMMARY")
    print(f"{'='*60}")
    print(f"✅ Successful: {successful}")
    print(f"❌ Failed: {failed}")
    print(f"📊 Total: {successful + failed}")
    if successful > 0:
        print("\nRunning verification...")
        verify_translations()


if __name__ == "__main__":
    main()

scripts/translate.py (modified)

@@ -128,6 +128,10 @@ def reconcile_metadata(
     translations = load_translations(lang_code)
     for key in translation_keys:
+        # Skip keys that weren't successfully translated
+        if key not in translations:
+            continue
         translation = translations[key]
         meta_key = f"@{key}"
         existing_meta = translations.get(meta_key, {})
@@ -312,7 +316,79 @@ def append_translate(lang_code: str, lang_display_name: str) -> None:
             temperature=0.0,
         )
         response = chat_completion.choices[0].message.content
-        _new_translations = json.loads(response)
+        # Try to parse JSON with error handling and retry logic
+        max_retries = 3
+        retry_count = 0
+        _new_translations = None
+        messages = [
+            {
+                "role": "system",
+                "content": "You are a translator that will only response to translation requests in json format without any additional information.",
+            },
+            {
+                "role": "user",
+                "content": prompt,
+            },
+            {
+                "role": "assistant",
+                "content": response,
+            },
+        ]
+        while retry_count < max_retries and _new_translations is None:
+            try:
+                # Try to clean up common JSON formatting issues first
+                cleaned_response = response.strip()
+                if cleaned_response.startswith("```json"):
+                    cleaned_response = cleaned_response[7:]
+                if cleaned_response.endswith("```"):
+                    cleaned_response = cleaned_response[:-3]
+                cleaned_response = cleaned_response.strip()
+                _new_translations = json.loads(cleaned_response)
+                break
+            except json.JSONDecodeError as e:
+                retry_count += 1
+                print(f"JSON parsing error (attempt {retry_count}/{max_retries}): {e}")
+                print(f"Problematic response: {response[:200]}...")
+                if retry_count < max_retries:
+                    print("Asking LLM to fix the JSON error...")
+                    # Append the error to the conversation and ask LLM to resolve it
+                    error_message = f"The previous response caused a JSON parsing error: {str(e)}. The response was: {response}\n\nPlease provide a corrected response that is valid JSON format only, without any markdown formatting or additional text."
+                    messages.append(
+                        {
+                            "role": "user",
+                            "content": error_message,
+                        }
+                    )
+                    chat_completion = client.chat.completions.create(
+                        messages=messages,
+                        model="gpt-4.1-nano",
+                        temperature=0.0,
+                        max_tokens=8000,
+                    )
+                    response = chat_completion.choices[0].message.content
+                    # Add the new response to the conversation
+                    messages.append(
+                        {
+                            "role": "assistant",
+                            "content": response,
+                        }
+                    )
+        if _new_translations is None:
+            print(
+                f"Failed to parse JSON after {max_retries} attempts. Skipping chunk {i//20 + 1}"
+            )
+            progress += len(chunk)
+            continue
         new_translations.update(_new_translations)
         print(f"Translated {progress + len(chunk)}/{len(needed_translations)}")
         progress += len(chunk)
@@ -330,7 +406,7 @@ def load_supported_languages() -> list[tuple[str, str]]:
     """
     Load the supported languages from the languages.json file.
     """
-    with open("scripts/languages.json", "r", encoding="utf-8") as f:
+    with open("languages.json", "r", encoding="utf-8") as f:
         raw_languages = json.load(f)
     languages: list[tuple[str, str]] = []
     for lang in raw_languages: