 31 
 on: November 01, 2025, 12:59:54 PM 
Started by petros987 - Last post by petros987
So my concern is this: I have the following ECU:
HW: 0261207939 SW: 1037366446 SW upg.: 8E0909518AF

I have XDFs for the following:
1) HW: 0261207939 SW: 1037366883 SW upg.: 8E0909518AF
and
2) HW: 0261208230 SW: 1037368072 SW upg.: 8E0909518AK

Which one is flashable to my original ECU?

 32 
 on: November 01, 2025, 11:19:24 AM 
Started by petros987 - Last post by grayjay
I did look back and confirmed that my 2003 late-split A4 B6 with a 1024kb bin and wideband O2 did start life as an 8E0909518AF ECU that I cross-flashed to 8E0909518AK. If you had entirely different ECU and engine hardware (such as from a transverse 1.8T), cross-flashing an incompatible .bin would likely brick your ECU and would be difficult to repair!

The only issue I remember is that when I first flashed the stock 8E0909518AK .bin, it would start but idled and ran really rough, and fuel trims were maxed out. I tried re-flashing the stock bin several times with the same problem, then flashed the community stage 1 tuned 8E0909518AK .bin and it has run fine ever since, with no fuel trim issues. I think dozens of other people have used the same stock .bin without problems, so it might not be an issue at all. If you want to start fresh with a stock bin and do have trouble getting the 8E0909518AK .bin to run well, you might search the forums to see if there is a different source for the 8E0909518AK-0003 .bin. Also, read and save a copy of your original 8E0909518AF .bin so you can always revert to it if needed. The later 8E0909518BC .bin should also be cross-flashable to the same 1024kb A4 B6 ECU hardware, but I do not know if it has an XDF that is as well defined.

 33 
 on: November 01, 2025, 07:15:18 AM 
Started by petros987 - Last post by petros987
What if the HW is not the same? Is it still OK to cross-flash?

 34 
 on: November 01, 2025, 04:05:31 AM 
Started by petros987 - Last post by petros987
You, sir, are a legend. Thank you very much. I didn't know it was cross-flashable.

 35 
 on: October 31, 2025, 08:45:33 PM 
Started by Misterdray - Last post by Misterdray

Split .json SCRIPT
____________________________________

import json
import os
import tkinter as tk
from tkinter import filedialog

def split_list(lst, n):
    """Return exactly n lists distributing elements as evenly as possible."""
    total = len(lst)
    base = total // n
    rem = total % n
    sizes = [(base + (1 if i < rem else 0)) for i in range(n)]
    out = []
    idx = 0
    for s in sizes:
        out.append(lst[idx: idx + s])
        idx += s
    return out

def split_items(items, n):
    """Split a list of (key,value) pairs into n chunks (lists of pairs)."""
    # Same even-distribution logic as split_list, just applied to key/value pairs.
    return split_list(items, n)

def split_json_file(skip_empty=False):
    root = tk.Tk()
    root.withdraw()
    input_path = filedialog.askopenfilename(
        title="Select a JSON file",
        filetypes=[("JSON Files", "*.json")]
    )
    if not input_path:
        print("No file selected. Exiting.")
        return

    # number of splits
    while True:
        try:
            num_splits = int(input("How many files would you like to split it into? ").strip())
            if num_splits < 1:
                print("Enter a positive integer.")
                continue
            break
        except ValueError:
            print("Enter a valid integer.")

    # read the input file using windows-1252
    try:
        with open(input_path, "r", encoding="windows-1252") as f:
            data = json.load(f)
    except Exception as e:
        print("Failed to read JSON:", e)
        return

    # prepare output base and folder
    output_dir = filedialog.askdirectory(title="Select folder to save split files")
    if not output_dir:
        print("No folder selected. Exiting.")
        return

    default_base = os.path.splitext(os.path.basename(input_path))[0] + "_part"
    base_name = input(f"Enter base name for split files (default: '{default_base}'): ").strip() or default_base

    # Helper to write a JSON object to disk
    def write_json(obj, idx, total_width):
        filename = f"{base_name}_{str(idx).zfill(total_width)}.json"
        path = os.path.join(output_dir, filename)
        with open(path, "w", encoding="utf-8") as out:
            json.dump(obj, out, indent=2, ensure_ascii=False)
        return path

    # Decide splitting strategy
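    #  - top-level list: split its elements across the output files
    #  - dict with several keys: split the key/value pairs across files
    #  - dict with a single key: split the inner list/dict and keep the wrapper key
    #  - scalar / empty object: write the value to the first file only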
    wrote_any = False
    total_width = len(str(num_splits))

    if isinstance(data, list):
        # straightforward: split list elements
        parts = split_list(data, num_splits)
        for i, part in enumerate(parts, start=1):
            if skip_empty and len(part) == 0:
                print(f"Skipping empty part {i}")
                continue
            path = write_json(part, i, total_width)
            print(f"Saved {path} ({len(part)} items)")
            wrote_any = True

    elif isinstance(data, dict):
        keys = list(data.keys())
        if len(keys) > 1:
            # split top-level key/value pairs across files
            items = [(k, data[k]) for k in keys]
            chunks = split_items(items, num_splits)
            for i, chunk in enumerate(chunks, start=1):
                if skip_empty and len(chunk) == 0:
                    print(f"Skipping empty part {i}")
                    continue
                out_dict = {k: v for k, v in chunk}
                path = write_json(out_dict, i, total_width)
                print(f"Saved {path} ({len(chunk)} keys)")
                wrote_any = True

        elif len(keys) == 1:
            # single key at top-level — inspect its value
            top_key = keys[0]
            inner = data[top_key]
            if isinstance(inner, list):
                # split inner list and wrap back with same top-level key
                parts = split_list(inner, num_splits)
                for i, part in enumerate(parts, start=1):
                    if skip_empty and len(part) == 0:
                        print(f"Skipping empty part {i}")
                        continue
                    out_obj = {top_key: part}
                    path = write_json(out_obj, i, total_width)
                    print(f"Saved {path} ({len(part)} items under '{top_key}')")
                    wrote_any = True
            elif isinstance(inner, dict):
                # split inner dict items across files and wrap back
                inner_items = list(inner.items())
                chunks = split_items(inner_items, num_splits)
                for i, chunk in enumerate(chunks, start=1):
                    if skip_empty and len(chunk) == 0:
                        print(f"Skipping empty part {i}")
                        continue
                    out_inner = {k: v for k, v in chunk}
                    out_obj = {top_key: out_inner}
                    path = write_json(out_obj, i, total_width)
                    print(f"Saved {path} ({len(chunk)} keys under '{top_key}')")
                    wrote_any = True
            else:
                # scalar or unknown single-object — can't meaningfully split inner structure
                print("Top-level is a single key with a scalar/non-splittable value.")
                # We'll create first file with the full object and optionally empty others (or skip)
                if not skip_empty:
                    for i in range(1, num_splits + 1):
                        obj = data if i == 1 else ({} if isinstance(data, dict) else [])
                        path = write_json(obj, i, total_width)
                        print(f"Saved {path} ({'full object' if i==1 else 'empty'})")
                        wrote_any = True
                else:
                    path = write_json(data, 1, total_width)
                    print(f"Saved {path} (full object)")
                    wrote_any = True
        else:
            # empty dict
            print("Top-level JSON object is an empty object {}.")
            for i in range(1, num_splits + 1):
                if skip_empty:
                    print(f"Skipping empty part {i}")
                    continue
                path = write_json({}, i, total_width)
                print(f"Saved {path} (empty object)")
                wrote_any = True

    else:
        # scalar (string/number/bool/null)
        print("Top-level JSON is a scalar (not a list or dict).")
        if not skip_empty:
            for i in range(1, num_splits + 1):
                obj = data if i == 1 else None
                path = write_json(obj, i, total_width)
                print(f"Saved {path} ({'value' if i==1 else 'null'})")
                wrote_any = True
        else:
            path = write_json(data, 1, total_width)
            print(f"Saved {path} (scalar)")
            wrote_any = True

    if not wrote_any:
        print("No files were written (maybe all parts were empty and skip_empty=True).")
    else:
        print("Splitting complete!")

if __name__ == "__main__":
    # If you prefer to skip writing empty parts, call with skip_empty=True
    split_json_file(skip_empty=False)
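
If you would rather import one translated file back into WinOLS instead of the individual parts, something along these lines should put the parts back together. This is only a rough sketch to pair with the splitter above, not part of the original scripts: the function name merge_json_parts and the example file pattern are made up, and it assumes every part came from the splitter (so they all share the same top-level shape).

import json
import glob
import os

def merge_json_parts(parts_dir, pattern, output_path):
    """Merge files written by split_json_file back into a single JSON document."""
    paths = sorted(glob.glob(os.path.join(parts_dir, pattern)))
    if not paths:
        raise FileNotFoundError(f"No files matching {pattern} in {parts_dir}")

    merged = None
    for path in paths:
        with open(path, "r", encoding="utf-8") as f:
            part = json.load(f)
        if merged is None:
            merged = part
        elif isinstance(merged, list) and isinstance(part, list):
            # parts of a top-level list
            merged.extend(part)
        elif isinstance(merged, dict) and isinstance(part, dict):
            for key, value in part.items():
                if key in merged and isinstance(merged[key], list) and isinstance(value, list):
                    merged[key].extend(value)   # single-key wrapper around a list
                elif key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
                    merged[key].update(value)   # single-key wrapper around a dict
                else:
                    merged[key] = value         # plain top-level key/value pairs
        else:
            raise ValueError(f"{path} does not match the shape of the earlier parts")

    with open(output_path, "w", encoding="utf-8") as out:
        json.dump(merged, out, indent=2, ensure_ascii=False)
    return output_path

if __name__ == "__main__":
    # Example only -- adjust the folder and file pattern to match your split files.
    merge_json_parts(".", "*_part_*.json", "merged.json")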

 36 
 on: October 31, 2025, 08:40:36 PM 
Started by Misterdray - Last post by Misterdray
Just wanted to share a Python script that can be used to translate a WinOLS .json export from German to English, which can then be imported back into WinOLS in English. When using this script you must be connected to the internet, as it uses Google Translate, and it does take a little bit of time. I have also included a script that can be used to split the .json file into multiple files so the translation goes quicker. These multiple files can still be imported back into WinOLS individually.

TRANSLATOR SCRIPT
____________________________________

import json
import asyncio
from googletrans import Translator
import chardet
import re
import tkinter as tk
from tkinter import filedialog

async def translate_json(input_file, output_file, retries=3, delay=1.5):
    translator = Translator()

    # --- Detect encoding ---
    with open(input_file, "rb") as f:
        raw_data = f.read()
        encoding = chardet.detect(raw_data)["encoding"] or "utf-8"
        print(f"Detected encoding: {encoding}")
        text = raw_data.decode(encoding, errors="replace")

    # --- Clean invalid chars ---
    cleaned_text = re.sub(r"[\x00-\x08\x0B-\x0C\x0E-\x1F]", "", text)
    data = json.loads(cleaned_text)

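    # Translate a single string, retrying a few times with a short delay before
    # falling back to the original (untranslated) text.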
    async def safe_translate(text):
        if not text.strip():
            return text
        for attempt in range(retries):
            try:
                result = await translator.translate(text, src="de", dest="en")
                return result.text
            except Exception as e:
                print(f"Error translating '{text}': {e} (attempt {attempt+1}/{retries})")
                await asyncio.sleep(delay)
        print(f"Keeping original text for '{text}' (translation failed).")
        return text

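    # Recursively walk the JSON tree and translate only the name/label fields
    # ("IDName", "Name", "FolderName"); all other values are passed through untouched.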
    async def translate_field(obj):
        if isinstance(obj, dict):
            for key, value in obj.items():
                if key in ["IDName", "Name", "FolderName"] and isinstance(value, str):
                    obj[key] = await safe_translate(value)
                else:
                    await translate_field(value)
        elif isinstance(obj, list):
            for item in obj:
                await translate_field(item)

    await translate_field(data)

    with open(output_file, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=4)

    print(f"Translation complete. Saved as {output_file}")


if __name__ == "__main__":
    # --- File selection ---
    root = tk.Tk()
    root.withdraw()  # Hide main window

    print("Select the input JSON file...")
    input_path = filedialog.askopenfilename(
        title="Select JSON file to translate",
        filetypes=[("JSON Files", "*.json"), ("All Files", "*.*")]
    )
    if not input_path:
        print("No input file selected. Exiting.")
        exit()

    print("Choose where to save the translated JSON file...")
    output_path = filedialog.asksaveasfilename(
        title="Save translated JSON file as",
        defaultextension=".json",
        filetypes=[("JSON Files", "*.json")],
        initialfile="translated.json"
    )
    if not output_path:
        print("No output path. Exiting.")
        exit()

    asyncio.run(translate_json(input_path, output_path))

 37 
 on: October 31, 2025, 04:55:31 PM 
Started by thelastleroy - Last post by grayjay
I see this thread has been dead for quite some time now, just wondering if anyone was able to successfully locate the RAM locations needed to log the torque intervention variables.

As hinted at above, ME7info is unable to identify some of the ME7.5 ECU .bin RAM locations needed to produce a fully populated .ecu setup file, which is then needed for ME7logger to have all those values available for data logging.
I am using the 8E0909518AK-0003 1037368072 .bin from this thread. I've made some hardware changes (different turbo) and am having some boost overshoot/undershoot that I now need to tune with the PID boost control tables. Among the RAM logging locations missing from ME7info are those needed to data-log the boost PID values: lditv_w, ldptv_w, ldrdtv_w.

If anyone using 8E0909518AK-1037368072 has been able to find the RAM logging locations and/or produce an ECU file for these PID logging variables (or the previously mentioned torque intervention variables), I would really appreciate it if you could share them.


 38 
 on: October 31, 2025, 11:17:16 AM 
Started by petros987 - Last post by grayjay
I had meant to ask whether you had 512kb & narrowband or 1024kb & wideband; I had that wrong. I looked at the .xdf for the AF bin in the thread I linked to, and it only had a few tables defined; I did not realize it was so incomplete.

Since you have 1024kb & wideband, there is a good chance that the later 8E0909518AK_368072 ECU .bin can be cross-flashed to your AF ECU hardware, and the AK .bin has a very complete XDF available. The stock AK .bin can be found at the beginning of the community stage 1 1.8T thread at
http://nefariousmotorsports.com/forum/index.php?topic=6955.0

The very complete XDF I would suggest using with this .bin is from Joshuafarwell in reply #755 of the same thread (page 51).

While I am fairly sure that it would work to cross-flash this bin to your ECU, do your own research to confirm it; maybe others can chime in on whether this cross-flash is possible. I started with a 2003 A4 B6 w/ 1024kb & wideband, and I'm pretty sure it was an AF, and the cross-flash to the AK .bin did work well for me.

 39 
 on: October 31, 2025, 11:13:56 AM 
Started by BetePaille - Last post by BetePaille
No EPC light nor P1388 today with PRNL1 and pgbdvhdo set back to stock. Is there some MED9.1 software that doesn't have these at all?

 40 
 on: October 31, 2025, 01:00:18 AM 
Started by petros987 - Last post by petros987
It's wideband, but the file size is 1024kb. Here is the file as well. It may be named stock, but it's not.
