More Awesome Than You!
  Show Posts
1  The Bowels of Trogdor / The Small Intestines of Trogdor / Re: Modifying compression for Sim City 4 on: 2011 January 31, 18:56:07
What you just gave me was written in C++. I also highly suspect that it was coded for 'The Sims 2'.
RefPak/QFS is the compression algorithm used by 'Sim City 4', 'The Sims 2', 'Spore' and 'The Sims 3' as explained in the original post (it is actually an internal EA algorithm, as it appears in some 'Need for Speed' and 'Command and Conquer' games too).

My code is written in C#. Honestly, why are you complaining? It is almost like straight C.
Would it be better if I posted the original Java code?

Just to emphasize the original problem: I'm looking for someone to help me recode my code so that it compresses in a way understood by 'Sim City 4', not 'The Sims 2'.
2  The Bowels of Trogdor / The Small Intestines of Trogdor / Re: Modifying compression for Sim City 4 on: 2011 January 31, 17:44:27
This is RefPak/QFS compression, used by Sim City 4, The Sims 2, Spore and The Sims 3. Although the one used for Spore and The Sims 3 is modified, AFAIK.

I found out that the first things I had to change to make it comply with Sim City 4 were these values:

Code:
                    // some Compression Data
                    //const int MAX_OFFSET = 0x20000;
                    const int MAX_OFFSET = 65535;
                    //const int MAX_COPY_COUNT = 0x404;
                    const int MAX_COPY_COUNT = 2047;

But that's not enough. You have to change the compression code itself, too. :(
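To make it concrete: going by the SimsWiki layout quoted in the original post below, the 4-byte control-code branch would have to be emitted roughly like this for Sim City 4. A hedged sketch with my own names, not tested against the game:

Code:
        // Hedged sketch: emit an SC4-style 4-byte control code
        // (110cccpp oooooooo oooooooo cccccccc). numPlain is the number of
        // plain-text bytes to follow (0-3), numCopy the copy count (5-2047),
        // offset the back-reference offset (0-65535).
        // Note: per the quoted layout, SC4 has no +1 on the offset, so the
        // copyOffset-- used for The Sims 2 would not apply to this branch.
        private static void EmitSC4FourByteCode(byte[] cData, ref int writeIndex,
            int numPlain, int numCopy, int offset)
        {
            cData[writeIndex++] = (byte)(0xC0 | (((numCopy - 5) >> 8) << 2) | numPlain);
            cData[writeIndex++] = (byte)((offset >> 8) & 0xFF);
            cData[writeIndex++] = (byte)(offset & 0xFF);
            cData[writeIndex++] = (byte)((numCopy - 5) & 0xFF);
        }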

3  The Bowels of Trogdor / The Small Intestines of Trogdor / Modifying compression for Sim City 4 on: 2011 January 31, 16:09:34
Hey guys!
I have some compression code I ported from Java. I think it is meant to compress the way 'The Sims 2' does it, but I want it to compress like 'Sim City 4'. I know there are some subtle differences between the two, but I'm not really that familiar with the algorithm. :(
Could anyone help me out?

Code:
        /// <summary>
        /// Copies data from the source array to the destination array.
        /// The copy is byte by byte, from SrcPos to DestPos, for the given length.
        /// </summary>
        /// <param name="Src">The source array.</param>
        /// <param name="SrcPos">The source position.</param>
        /// <param name="Dest">The destination array.</param>
        /// <param name="DestPos">The destination position.</param>
        /// <param name="Length">The length.</param>
        private void ArrayCopy2(byte[] Src, int SrcPos, ref byte[] Dest, int DestPos, long Length)
        {
            if (Dest.Length < DestPos + Length)
            {
                byte[] DestExt = new byte[(int)(DestPos + Length)];
                Array.Copy(Dest, 0, DestExt, 0, Dest.Length);
                Dest = DestExt;
            }

            for (int i = 0; i < Length; i++)
            {
                if (SrcPos == Src.Length || (SrcPos + i) == Src.Length)
                    break;

                Dest[DestPos + i] = Src[SrcPos + i];
            }
        }

        /// <summary>
        /// Copies data from array at destPos-srcPos to array at destPos.
        /// </summary>
        /// <param name="array">The array.</param>
        /// <param name="srcPos">The Position to copy from (reverse from end of array!)</param>
        /// <param name="destPos">The Position to copy to.</param>
        /// <param name="length">The length of data to copy.</param>
        private void OffsetCopy(ref byte[] array, int srcPos, int destPos, long length)
        {
            srcPos = destPos - srcPos;

            if (array.Length < destPos + length)
            {
                byte[] NewArray = new byte[(int)(destPos + length)];
                Array.Copy(array, 0, NewArray, 0, array.Length);
                array = NewArray;
            }

            for (int i = 0; i < length; i++)
            {
                array[destPos + i] = array[srcPos + i];
            }
        }

        /// <summary>
        /// Writes a uint to a binary array.
        /// </summary>
        /// <param name="Data">The binary array.</param>
        /// <param name="Value">The uint value to write.</param>
        /// <param name="Position">The position to write to within the array.</param>
        private void WriteUInt(ref byte[] Data, uint Value, long Position)
        {
            MemoryStream MemStream = new MemoryStream(Data);
            BinaryWriter Writer = new BinaryWriter(MemStream);
           
            Writer.BaseStream.Seek(Position, SeekOrigin.Begin);
            Writer.Write(Value);
            Writer.Flush();

            Data = MemStream.ToArray();
            Writer.Close();
        }

        /// <summary>
        /// Writes a ushort to a binary array.
        /// </summary>
        /// <param name="Data">The binary array.</param>
        /// <param name="Value">The ushort value to write.</param>
        /// <param name="Position">The position to write to within the array.</param>
        private void WriteUShort(ref byte[] Data, ushort Value, long Position)
        {
            MemoryStream MemStream = new MemoryStream(Data);
            BinaryWriter Writer = new BinaryWriter(MemStream);

            Writer.BaseStream.Seek(Position, SeekOrigin.Begin);
            Writer.Write(Value);
            Writer.Flush();

            Data = MemStream.ToArray();
            Writer.Close();
        }

        /// <summary>
        /// Reverses the given array and writes it to a binary array
        /// at the specified position.
        /// </summary>
        /// <param name="Data">The binary array.</param>
        /// <param name="Ar">The array to reverse and write.</param>
        /// <param name="Position">The position to write to within the array.</param>
        private void WriteReversedArray(ref byte[] Data, byte[] Ar, long Position)
        {
            MemoryStream MemStream = new MemoryStream(Data);
            BinaryWriter Writer = new BinaryWriter(MemStream);

            Writer.BaseStream.Seek(Position, SeekOrigin.Begin);
            Array.Reverse(Ar);
            Writer.Write(Ar);
            Writer.Flush();

            Data = MemStream.ToArray();
            Writer.Close();
        }

        /// <summary>
        /// Writes the first 9 bytes of the RefPak header to the supplied
        /// array of compressed data.
        /// </summary>
        /// <param name="Data">The array to write to.</param>
        /// <param name="DecompressedSize">The decompressed size of the data.</param>
        /// <param name="CompressedSize">The compressed size of the data. Does NOT include header size.</param>
        private void WriteFirstHeader(ref byte[] Data, uint DecompressedSize, uint CompressedSize)
        {
            MemoryStream MemStream = new MemoryStream(Data);
            BinaryWriter Writer = new BinaryWriter(MemStream);

            Writer.Write((byte)0x01);       //Indicates this data is compressed.

            // the decompressed size is stored in only 3 bytes
            byte[] Decompressed = BitConverter.GetBytes(DecompressedSize);
            Writer.Write(Decompressed, 0, 3);
            Writer.Write((byte)0x00);       //Out Of Bounds character.
            Writer.Write(CompressedSize);   //Stream body size. Does NOT include size of RefPak header.
            Writer.Flush();

            Data = MemStream.ToArray();
            Writer.Close();
        }

        /// <summary>
        /// Gets a ushort from a binary array.
        /// </summary>
        /// <param name="Data">The binary array.</param>
        /// <returns>The ushort.</returns>
        ushort GetUShort(byte[] Data)
        {
            ushort Value;

            MemoryStream MemStream = new MemoryStream(Data);
            BinaryReader Reader = new BinaryReader(MemStream);

            Value = Reader.ReadUInt16();

            Reader.Close();

            return Value;
        }

        /// <summary>
        /// Compress the decompressed data.
        /// </summary>
        /// <param name="dData">The decompressed data.</param>
        /// <returns>The compressed data.</returns>
        public byte[] Compress(byte[] dData)
        {
            // only try to compress if the data is big enough
            if (dData.Length > 6)
            {
                // check if the data is already compressed
                uint signature = GetUShort(dData); //(uint)ToValue(dData, 0x04, 2, false);

                if (signature != m_MAGICNUMBER_QFS)
                {
                    // some Compression Data
                    const int MAX_OFFSET = 0x20000;
                    const int MAX_COPY_COUNT = 0x404;
                    // used to finetune the lookup (small values increase the
                    // compression for Big Files)
                    const int QFS_MAXITER = 0x80;

                    // contains the latest offset for a combination of two
                    // characters
                    Dictionary<int, ArrayList> CmpMap2 = new Dictionary<int, ArrayList>();

                    // will contain the compressed data (maximal size =
                    // uncompressedSize + MAX_COPY_COUNT + header)
                    byte[] cData = new byte[dData.Length + MAX_COPY_COUNT + 18];

                    // init some vars
                    int writeIndex = 18; // leave room for the header; for FAR3 it is twice as long (18 bytes) as for DBPF (9)
                    int lastReadIndex = 0;
                    ArrayList indexList = null;
                    int copyOffset = 0;
                    int copyCount = 0;
                    int index = -1;
                    bool end = false;

                    // begin main compression loop
                    while (index < dData.Length - 3)
                    {
                        // get all Compression Candidates (list of offsets for all
                        // occurances of the current 3 bytes)
                        do
                        {
                            index++;

                            if (index == dData.Length - 2)
                            {
                                end = true;
                                break;
                            }
                            int mapindex = (dData[index] + (dData[index + 1] << 8) + (dData[index + 2] << 16));

                            if (!CmpMap2.TryGetValue(mapindex, out indexList))
                            {
                                indexList = new ArrayList();
                                CmpMap2.Add(mapindex, indexList);
                            }

                            indexList.Add(index);
                        } while (index < lastReadIndex);

                        if (end)
                        {
                            break;
                        }

                        // find the longest repeating byte sequence in the index
                        // List (for offset copy)
                        int offsetCopyCount = 0;
                        int loopcount = 1;

                        while ((loopcount < indexList.Count) && (loopcount < QFS_MAXITER))
                        {
                            int foundindex = (int)indexList[(indexList.Count - 1) - loopcount];
                           
                            if ((index - foundindex) >= MAX_OFFSET)
                                break;
                           
                            loopcount++;
                            copyCount = 3;
                            while ((dData.Length > index + copyCount) && (dData[index + copyCount]
                                == dData[foundindex + copyCount]) && (copyCount < MAX_COPY_COUNT))
                            {
                                copyCount++;
                            }

                            if (copyCount > offsetCopyCount)
                            {
                                offsetCopyCount = copyCount;
                                copyOffset = index - foundindex;
                            }
                        }

                        // check if we can compress this
                        // the FSH Tool additionally does this:
                        if (offsetCopyCount > dData.Length - index)
                        {
                            offsetCopyCount = dData.Length - index;
                        }
                        if (offsetCopyCount <= 2)
                        {
                            offsetCopyCount = 0;
                        }
                        else if ((offsetCopyCount == 3) && (copyOffset > 0x400))
                        { // 1024
                            offsetCopyCount = 0;
                        }
                        else if ((offsetCopyCount == 4) && (copyOffset > 0x4000))
                        { // 16384
                            offsetCopyCount = 0;
                        }

                        // is this offset-compressable? then do the compression
                        if (offsetCopyCount > 0)
                        {
                            // plaincopy

                            // the FSH Tool does this here (A):
                            while (index - lastReadIndex >= 4)
                            {
                                copyCount = (index - lastReadIndex) / 4 - 1;

                                if (copyCount > 0x1B)
                                    copyCount = 0x1B;

                                cData[writeIndex++] = (byte)(0xE0 + copyCount);
                                copyCount = 4 * copyCount + 4;

                                ArrayCopy2(dData, lastReadIndex, ref cData, writeIndex, copyCount);
                                lastReadIndex += copyCount;
                                writeIndex += copyCount;
                            }
                            // while ((index - lastReadIndex) > 3) {
                            //     copyCount = (index - lastReadIndex);
                            //     while (copyCount > 0x71) {
                            //         copyCount -= 0x71;
                            //     }
                            //     copyCount = copyCount & 0xfc;
                            //     int realCopyCount = (copyCount >> 2);
                            //     cData[writeIndex++] = (short) (0xdf + realCopyCount);
                            //     arrayCopy2(dData, lastReadIndex, cData, writeIndex, copyCount);
                            //     writeIndex += copyCount;
                            //     lastReadIndex += copyCount;
                            // }

                            // offsetcopy
                            copyCount = index - lastReadIndex;
                            copyOffset--;

                            if ((offsetCopyCount <= 0x0A) && (copyOffset < 0x400))
                            {
                                cData[writeIndex++] = (byte)(((copyOffset >> 8) << 5)
                                        + ((offsetCopyCount - 3) << 2) + copyCount);
                                cData[writeIndex++] = (byte)(copyOffset & 0xff);
                            }
                            else if ((offsetCopyCount <= 0x43) && (copyOffset < 0x4000))
                            {
                                cData[writeIndex++] = (byte)(0x80 + (offsetCopyCount - 4));
                                cData[writeIndex++] = (byte)((copyCount << 6) + (copyOffset >> 8));
                                cData[writeIndex++] = (byte)(copyOffset & 0xff);
                            }
                            else if ((offsetCopyCount <= MAX_COPY_COUNT) && (copyOffset < MAX_OFFSET))
                            {
                                cData[writeIndex++] = (byte)(0xc0 + ((copyOffset >> 16) << 4)
                                    + (((offsetCopyCount - 5) >> 8) << 2) + copyCount);
                                cData[writeIndex++] = (byte)((copyOffset >> 8) & 0xff);
                                cData[writeIndex++] = (byte)(copyOffset & 0xff);
                                cData[writeIndex++] = (byte)((offsetCopyCount - 5) & 0xff);
                            }
                            // else {
                            //     copyCount = 0;
                            //     offsetCopyCount = 0;
                            // }

                            // do the offset copy
                            ArrayCopy2(dData, lastReadIndex, ref cData, writeIndex, copyCount);
                            writeIndex += copyCount;
                            lastReadIndex += copyCount;
                            lastReadIndex += offsetCopyCount;
                        }
                    }

                    // add the End Record
                    index = dData.Length;
                    // the FSH Tool does the same as above (A)
                    while (index - lastReadIndex >= 4)
                    {
                        copyCount = (index - lastReadIndex) / 4 - 1;
                       
                        if (copyCount > 0x1B)
                            copyCount = 0x1B;
                       
                        cData[writeIndex++] = (byte)(0xE0 + copyCount);
                        copyCount = 4 * copyCount + 4;

                        ArrayCopy2(dData, lastReadIndex, ref cData, writeIndex, copyCount);
                        lastReadIndex += copyCount;
                        writeIndex += copyCount;
                    }

                    // lastReadIndex = Math.min(index, lastReadIndex);
                    // while ((index - lastReadIndex) > 3) {
                    // copyCount = (index - lastReadIndex);
                    // while (copyCount > 0x71) {
                    // copyCount -= 0x71;
                    // }
                    // copyCount = copyCount & 0xfc;
                    // int realCopyCount = (copyCount >> 2);
                    // cData[writeIndex++] = (short) (0xdf + realCopyCount);
                    // arrayCopy2(dData, lastReadIndex, cData, writeIndex,
                    // copyCount);
                    // writeIndex += copyCount;
                    // lastReadIndex += copyCount;
                    // }
                    copyCount = index - lastReadIndex;
                    cData[writeIndex++] = (byte)(0xfc + copyCount);
                    ArrayCopy2(dData, lastReadIndex, ref cData, writeIndex, copyCount);
                    writeIndex += copyCount;
                    lastReadIndex += copyCount;

                    // write the 18-byte header in front of the compressed data
                    WriteFirstHeader(ref cData, (uint)dData.Length, (uint)(writeIndex - 18));

                    // set the compressed size (body only, header excluded)
                    WriteUInt(ref cData, (uint)(writeIndex - 18), 9);
                    m_CompressedSize = writeIndex - 18;
                    // set the MAGICNUMBER
                    WriteUShort(ref cData, (ushort)m_MAGICNUMBER_QFS, 13);
                    // set the decompressed size (3 bytes, big-endian)
                    byte[] revData = new byte[3];
                    byte[] Tmp = BitConverter.GetBytes(dData.Length);
                    Buffer.BlockCopy(Tmp, 0, revData, 0, 3);
                    WriteReversedArray(ref cData, revData, 15);

                    this.m_DecompressedSize = dData.Length;
                    m_Compressed = (m_CompressedSize < m_DecompressedSize);

                    byte[] retData = new byte[writeIndex];
                    Array.Copy(cData, 0, retData, 0, writeIndex);

                    return retData;
                }
            }

            return dData;
        }

Here are the differences as stated by SimsWiki:

The Sims 2

Code:
CC length: 4 bytes
Num plain text: byte0 & 0x03
Num to copy: ((byte0 & 0x0C) << 6) + byte3 + 5
Copy offset: ((byte0 & 0x10) << 12) + (byte1 << 8) + byte2 + 1
Bits: 110occpp oooooooo oooooooo cccccccc
Num plain text limit: 0-3
Num to copy limit: 5-1028
Maximum Offset: 131072

Sim City 4

Code:
CC length: 4 bytes
Num plain text: byte0 & 0x03
Num to copy: ((byte0 & 0x1C) << 6) + byte3 + 5
Copy offset: (byte1 << 8) + byte2
Bits: 110cccpp oooooooo oooooooo cccccccc
Num plain text limit: 0-3
Num to copy limit: 5-2047
Maximum Offset: 65535
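
Read side by side, the two tables differ in exactly two ways: SC4 reclaims the 0x10 offset bit as a third copy-count bit, and stores the offset without the +1. As a hedged C# transcription (names are mine):

Code:
        // Hedged transcription of the two tables above; cc holds the four
        // bytes of a 0xC0..0xDF control code.
        private static void DecodeFourByteCC(byte[] cc, bool simCity4,
            out int numPlain, out int numCopy, out int offset)
        {
            numPlain = cc[0] & 0x03; // same in both formats

            if (simCity4)
            {
                // three high copy-count bits, plain 16-bit offset, no +1
                numCopy = ((cc[0] & 0x1C) << 6) + cc[3] + 5;
                offset = (cc[1] << 8) + cc[2];
            }
            else // The Sims 2
            {
                // two high copy-count bits, a 17th offset bit, offset stored minus 1
                numCopy = ((cc[0] & 0x0C) << 6) + cc[3] + 5;
                offset = ((cc[0] & 0x10) << 12) + (cc[1] << 8) + cc[2] + 1;
            }
        }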



4  The Bowels of Trogdor / The Large Intestines of Trogdor / Re: C# DBPF unpacking on: 2009 July 15, 22:28:10
Ok, another update:

I seem to have been able to extract the files now, but they appear to be compressed with a type of QFS/RefPack compression that has an 18-byte header. It should be the same type of compression used by Sim City 4. Does anyone know anything about this? Usually the header is 9 bytes long, but decompressing the files seems completely impossible using normal QFS decompression, even when modified for SC4.
I've never heard of an 18-byte QFS header before. :(
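One hunch (unconfirmed): the 18-byte header might simply be the usual 9-byte QFS header (4-byte compressed size, 2-byte magic, 3-byte big-endian uncompressed size) with an extra 9 bytes stuck in front. A hedged sketch of how I'd probe for that:

Code:
        // Hedged sketch: probe for a QFS magic number at offset 4 (normal
        // 9-byte header) or offset 13 (hypothetical 9 + 9 byte header).
        // Returns the offset where the compressed body would start, or -1.
        private static int FindQfsBody(byte[] data)
        {
            // the magic 0x10FB is stored as the bytes 0x10, 0xFB
            if (data.Length > 9 && data[4] == 0x10 && data[5] == 0xFB)
                return 9;   // normal DBPF-style header
            if (data.Length > 18 && data[13] == 0x10 && data[14] == 0xFB)
                return 18;  // 18-byte TSO-style header
            return -1;
        }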
5  The Bowels of Trogdor / The Large Intestines of Trogdor / Re: C# DBPF unpacking on: 2009 July 12, 14:05:07
Thanks guys!
Finally got my copy of TSO in the mail, and... DBPF isn't the largest problem. In fact, it hasn't been a problem at all. TSO's DBPF archives aren't compressed. They contain directories that I haven't been able to figure out, but I'm still able to extract the files.
No, the biggest challenge as far as TSO's data is concerned is the new FAR (File ARchive) format, originally used by The Sims (1). It seems most of TSO's data is stored in those archives, and I haven't been able to figure 'em out yet. They don't seem to be substantially different from the original FAR archives, except that they seem to support compression (hopefully the same kind of RefPack compression used by SimCity 4's DBPF archives). Here is my preliminary writeup:

Code:
Version 3

The Sims Online (TSO) introduces a new version of the FAR format. This format is FAR, version 3. This format has not been completely reversed yet, so most of the details below are not set in stone.

Header

    * Signature - An eight byte string, consisting of the characters 'FAR!byAZ' (without quotes).
    * Version - A byte signifying the version of the archive. Should be 3.
    * Unknown - Three bytes of 0.
    * Manifest offset - a 4 byte integer specifying the offset to manifest of the archive (from the beginning of the file), where offsets to every entry are kept.

Manifest

    * Number of files - A 4 byte integer specifying the number of files in the archive.
    * Manifest Entries - As many manifest entries as stated by the previous integer.

Manifest entry

    * Raw Size - The uncompressed size of the filedata, stored as a UInt32 (4 bytes).
    * Compressed Size - The compressed size of the file. FAR V. 3 seems to support compression. Will be the same as the first field if the file is not compressed. This seems to be a UInt16 (2 bytes).
    * Offset - The offset of the filedata in the file. Could possibly be a UInt32, but seems to be a UInt16 (2 bytes).
    * Unknown - 17 bytes of an unknown purpose.
    * Filename - A string representing the filename of the file(data). Seems to be null-terminated.

If anyone wants to help out in documenting this format, here is the link to the uploaded archive that I'm currently working on. I also have a Wiki here that I use to document all of the original formats from the original Sims games (mostly TSO though). Feel free to use info from this Wiki as you like.
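
For anyone who wants to experiment, here's a hedged sketch of a reader that follows the writeup above literally; every field marked "seems to be" up there is just as much an assumption down here:

Code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

class Far3Entry
{
    public uint RawSize;
    public ushort CompressedSize;
    public ushort Offset;
    public string Filename;
}

static class Far3Reader
{
    // Hedged sketch following the preliminary writeup above.
    public static List<Far3Entry> ReadManifest(string path)
    {
        List<Far3Entry> entries = new List<Far3Entry>();

        using (BinaryReader reader = new BinaryReader(File.OpenRead(path)))
        {
            string signature = Encoding.ASCII.GetString(reader.ReadBytes(8));
            if (signature != "FAR!byAZ")
                throw new InvalidDataException("Not a FAR archive!");

            byte version = reader.ReadByte();
            if (version != 3)
                throw new InvalidDataException("Unexpected FAR version: " + version);
            reader.ReadBytes(3);                // three bytes of 0
            uint manifestOffset = reader.ReadUInt32();

            reader.BaseStream.Seek(manifestOffset, SeekOrigin.Begin);
            uint numFiles = reader.ReadUInt32();

            for (uint i = 0; i < numFiles; i++)
            {
                Far3Entry entry = new Far3Entry();
                entry.RawSize = reader.ReadUInt32();
                entry.CompressedSize = reader.ReadUInt16(); // same as RawSize if uncompressed
                entry.Offset = reader.ReadUInt16();
                reader.ReadBytes(17);                       // unknown purpose

                // the filename seems to be null-terminated
                StringBuilder name = new StringBuilder();
                byte b;
                while ((b = reader.ReadByte()) != 0)
                    name.Append((char)b);
                entry.Filename = name.ToString();

                entries.Add(entry);
            }
        }

        return entries;
    }
}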
6  The Bowels of Trogdor / The Large Intestines of Trogdor / Re: C# DBPF unpacking on: 2009 July 04, 18:49:25
According to my research (which has been extensive), The Sims Online was the first game where the DBPF format was used to store (most of?) the game's data. I actually... am fairly sure I tried to extract the data while the game was still online, but I didn't get it to work.
Now that I have a decompression routine that works, all I have to do is modify it to work with Sim City 4 (which presumably contains the earliest available version of the decompression algorithm, a descendant of The Sims Online's), and it'll hopefully work. If not, I'm going to have to open the archives in a hex viewer and see if I can get some more information.
And yes, the idea is to rewrite the client and server from scratch, but retaining compatibility with the old game data so it won't have to be remade. Considering TSO hasn't been online for about two years, EA aren't making any money from it and haven't for a long time, so I'm hoping they won't be offended if I re-release the client (which was, incidentally, freely available for download under the name 'EA-Land' for the last year or so the game was online).
7  The Bowels of Trogdor / The Large Intestines of Trogdor / Re: C# DBPF unpacking on: 2009 July 03, 15:48:21
Thanks for letting me know!
Right now, being able to compress things is not my greatest concern, as I am simply trying to gain access to the data in The Sims Online's DBPF archives. Why do I have those archives?
I ordered a used copy of The Sims Online off of eBay, because I had deleted EA-Land off my hard drive.
I am now so sick and tired of nobody having written a server emulator for this game that I intend to see about recreating the client.

So far I have partial support for IFF files (still working on getting DGRP resources to display properly), full support for FAR archives, and full support for DBPF archives (assuming TSO uses the same compression as Sim City 4, which... I hope it does).

And yes... I realize you need a server as well, but I do not think writing a custom server for TSO is going to be extremely hard, as it was never the most bandwidth intensive game. In fact, any game that can be run through HTTP with SSL encryption (which seems to be how most of the original protocol was implemented -- I did some packet sniffing while it was still online) is... not very bandwidth intensive at all.
8  The Bowels of Trogdor / The Large Intestines of Trogdor / Re: C# DBPF unpacking on: 2009 June 30, 14:52:52
Heh, after digging around more in the SimPE source, I found the C# Decompression function I had been longing for.
And no, it turns out that the problem is not, in fact, that I'm coding in C#, but that the DBPF format is extremely quirky and cumbersome. I probably overlooked some details when reading specification(s), but anyways... here's what I found:

1. All files in the archive will be listed in the archive's DIR resource even if they are not compressed, at least as long as any file in the archive is compressed.
2. The uncompressed size listed for an entry in the archive's DIR resource consistently seems to be wrong, meaning:
3. ... to find out if a file is actually compressed, you have to read its compression header, get the uncompressed size from there, and then check whether it's smaller than or the same as the [compressed] size. If it isn't, the file is compressed (see the sketch below).
4. Consequently, every single file (except the DIR file, obviously) in a compressed archive will have a compression header, even if the file itself is not compressed.

Note that all of the above only applies to archives that have compressed files in them, and I haven't tested my code on an uncompressed archive yet. The rules might be slightly or even totally different for said archives.
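
In code, point 3 boils down to something like this hedged sketch (the header layout is the one my Decompress() below reads; the 3-byte size being big-endian is my assumption):

Code:
        // Hedged sketch of point 3: the entry is really compressed only if
        // the uncompressed size in its compression header is bigger than
        // the size the archive stores for it.
        private static bool IsReallyCompressed(byte[] fileData, uint storedSize)
        {
            if (fileData.Length < 9)
                return false;

            // 4-byte compressed size, then the 2-byte magic (reads as 0xFB10)
            ushort magic = (ushort)(fileData[4] | (fileData[5] << 8));
            if (magic != 0xFB10)
                return false;

            // 3-byte uncompressed size at offset 6, assumed big-endian
            uint uncompressed = (uint)((fileData[6] << 16) | (fileData[7] << 8) | fileData[8]);
            return uncompressed > storedSize;
        }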
9  The Bowels of Trogdor / The Large Intestines of Trogdor / Re: C# DBPF unpacking on: 2009 June 30, 13:18:44
That's a C++ tool, and the code is generally (no offense to whoever wrote it) kinda cryptic.

Anyways... I don't know if anyone here is very good with C#, but I changed my code kinda drastically:

Code:
        /// <summary>
        /// Extracts and uncompresses, when necessary, all the loaded entries in an archive
        /// to the specified path. Assumes that LoadArchive() has been called first.
        /// </summary>
        /// <param name="Path">The path to extract to.</param>
        public void ExtractFiles(string Path)
        {
            BinaryReader Reader = new BinaryReader(File.Open(m_ArchiveName, FileMode.Open));

            for(int i = 0; i < m_Entries.Count; i++)
            {
                Reader.BaseStream.Seek(m_Entries[i].FileOffset, SeekOrigin.Begin);

                if (m_Entries[i].Compressed)
                {
                    m_Entries[i].FileData = new byte[m_Entries[i].DecompressedSize];

                    Reader.BaseStream.Seek(m_Entries[i].FileOffset, SeekOrigin.Begin);
                    m_Entries[i].FileData = Reader.ReadBytes((int)m_Entries[i].FileSize);

                    m_Entries[i] = Decompress(new BinaryReader(new MemoryStream(m_Entries[i].FileData)),
                        m_Entries[i]);

                    if (m_Entries[i] != null)
                    {
                        BinaryWriter Writer = new BinaryWriter(File.Create(Path + m_Entries[i].InstanceID.ToString() + "." + m_Entries[i].GetFileExtension()));
                        Writer.Write(m_Entries[i].FileData);
                        Writer.Close();
                    }
                }
            }
        }

        private DBPFEntry Decompress(BinaryReader Reader, DBPFEntry Entry)
        {
            uint CompressedSize = Reader.ReadUInt32();
            ushort CompressionID = Reader.ReadUInt16();
            Reader.ReadBytes(3); //Uncompressed size of file...

            int NumPlainChars = 0;
            int NumCopy = 0;
            int Offset = 0;

            MemoryStream Answer = new MemoryStream();
            BinaryWriter Writer = new BinaryWriter(Answer);

            if (CompressionID == 0xFB10 || CompressionID == 0xFB50)
            {
                bool Stop = false;

                while(!Stop)
                {
                    //Control character
                    byte CC = Reader.ReadByte();

                    if (CC >= 252) //0xFC
                    {
                        NumPlainChars = CC & 0x03;
                        if (NumPlainChars > Reader.BaseStream.Length)
                            NumPlainChars = (int)Reader.BaseStream.Length;

                        NumCopy = 0;
                        Offset = 0;

                        Stop = true;
                    }
                    else if (CC >= 224) //0xE0
                    {
                        NumPlainChars = (CC - 0xDF) << 2;
                        NumCopy = 0;
                        Offset = 0;
                    }
                    else if (CC >= 192) //0xC0
                    {
                        byte Byte1 = Reader.ReadByte();
                        byte Byte2 = Reader.ReadByte();
                        byte Byte3 = Reader.ReadByte();

                        NumPlainChars = CC & 0x03;
                        NumCopy = ((CC & 0x0C) << 6) + 5 + Byte3;
                        Offset = ((CC & 0x10) << 12 ) + (Byte1 << 8) + Byte2;
                    }
                    else if (CC >= 128) //0x80
                    {
                        byte Byte1 = Reader.ReadByte();
                        byte Byte2 = Reader.ReadByte();

                        NumPlainChars = (Byte1 & 0xC0) >> 6;
                        NumCopy = (CC & 0x3F) + 4;
                        Offset = ((Byte1 & 0x3F) << 8) + Byte2;
                    }
                    else
                    {
                        byte Byte1 = Reader.ReadByte();

                        NumPlainChars = (CC & 0x03);
                        NumCopy = ((CC & 0x1C) >> 2) + 3;
                        Offset = ((CC & 0x60) << 3) + Byte1;
                    }

                    if (NumPlainChars > 0)
                        Writer.Write(Reader.ReadBytes(NumPlainChars));

                    long FromOffset = Answer.Length - (Offset + 1);

                    for (int i = 0; i < NumCopy; i++)
                    {
                        //Answer += Answer.Substring(FromOffset + i, 1);
                        Writer.Write(BinarySubstring(Reader, FromOffset + i, 1));
                    }
                }

                Entry.FileData = Answer.ToArray();
                Writer.Close();
                Reader.Close();
            }
            else
                return null;

            return Entry;
        }

        private byte[] BinarySubstring(BinaryReader Reader, long Offset, int NumBytes)
        {
            long CurrentOffset = Reader.BaseStream.Position;
            Reader.BaseStream.Seek(Offset, SeekOrigin.End);
            byte[] Data = Reader.ReadBytes(NumBytes);
            Reader.BaseStream.Seek(CurrentOffset, SeekOrigin.Begin);

            return Data;
        }

I actually managed to inflate the data to 17 megs at one point with this code (the test file is only 13 megs), but characters were copied after each other in long sequences (i.e. the inflation didn't work properly). The main problem with this code is that the Offset (the parameter for the BinarySubstring() method) consistently wants the reader to seek to a position before the beginning of the stream, even when I seek from the end of the stream!
Where am I supposed to be seeking FROM (the beginning, the current position, or the end of the stream), and why is it trying to seek to a position BEFORE the beginning of the stream?
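
For reference, my current understanding of how the offset copy is supposed to work: the offset is relative to the end of what has been decompressed so far, but you convert it to an absolute position and read from the output buffer, seeking from the beginning, never from the compressed input. A hedged sketch:

Code:
        // Hedged sketch: copy numCopy bytes from the already-decompressed
        // output back onto its own end. 'answer' is the output MemoryStream.
        private static void CopyFromOutput(MemoryStream answer, int offset, int numCopy)
        {
            // absolute position (from the beginning!) of the first byte to repeat
            long from = answer.Length - (offset + 1);

            answer.Seek(0, SeekOrigin.End);
            for (int i = 0; i < numCopy; i++)
            {
                // byte by byte, because the copy may overlap its own output;
                // re-fetch the buffer since WriteByte can reallocate it
                byte[] buffer = answer.GetBuffer();
                answer.WriteByte(buffer[from + i]);
            }
        }

If from ever comes out negative here, the offset itself was decoded wrong (wrong control-code layout for the format), which I suspect is what's happening to me.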
10  The Bowels of Trogdor / The Large Intestines of Trogdor / Re: C# DBPF unpacking on: 2009 June 29, 06:14:52
Thanks!
But... exactly what tool are you referring to?
There seems to be a lot of source code in his SVN, but none that's Sims-related.

Edit:

I tried using zlib's deflate() decompression, by way of SharpZipLib, but I keep getting 'unknown block type' errors, even when I skip the 9-byte header. :\
Code:
        private byte[] Deflate(DBPFEntry Entry)
        {
            Stream S = new InflaterInputStream(new MemoryStream(Entry.FileData), new ICSharpCode.SharpZipLib.Zip.Compression.Inflater(true));
            MemoryStream MemStream = new MemoryStream();

            int SizeRead = 0;
            byte[] Buffer = new byte[2048];

            while (true)
            {
                SizeRead = S.Read(Buffer, 0, 2048);
                if (SizeRead > 0)
                    MemStream.Write(Buffer, 0, SizeRead); // only write what was actually read
                else
                    break;
            }

            return MemStream.ToArray();
        }
11  The Bowels of Trogdor / The Large Intestines of Trogdor / C# DBPF unpacking on: 2009 June 28, 21:28:09
Anyone have any idea how to implement the DBPF compression in C#?
I've been trying to convert the PHP version, but it just... keeps giving me a -lot- of errors.

Code:
        /// <summary>
        /// Extracts and uncompresses, when necessary, all the loaded entries in an archive
        /// to the specified path. Assumes that LoadArchive() has been called first.
        /// </summary>
        /// <param name="Path">The path to extract to.</param>
        public void ExtractFiles(string Path)
        {
            BinaryReader Reader = new BinaryReader(File.Open(m_ArchiveName, FileMode.Open));

            foreach (DBPFEntry Entry in m_Entries)
            {
                Reader.BaseStream.Seek(Entry.FileOffset, SeekOrigin.Begin);

                if (Entry.Compressed)
                {
                    Entry.FileData = new byte[Entry.DecompressedSize];
                    Decompress(Reader, Entry);
                }
            }
        }

        private void Decompress(BinaryReader Reader, DBPFEntry Entry)
        {
            uint CompressedSize = Reader.ReadUInt32();
            ushort CompressionID = Reader.ReadUInt16();
            Reader.ReadBytes(3); //Uncompressed size of file...

            int NumPlainChars = 0;
            int NumCopy = 0;
            int Offset = 0;

            string Answer = "";

            if (CompressionID == 0xFB10 || CompressionID == 0xFB50)
            {
                uint Length = Entry.FileSize;

                for (; Length > 0;)
                {
                    //Control character
                    byte CC = Reader.ReadByte();
                    Length -= 1;

                    if (CC >= 252) //0xFC
                    {
                        NumPlainChars = CC & 0x03;
                        if (NumPlainChars > Length)
                            NumPlainChars = (int)Length;

                        NumCopy = 0;
                        Offset = 0;
                    }
                    else if (CC >= 224) //0xE0
                    {
                        NumPlainChars = (CC - 0xDF) << 2;
                        NumCopy = 0;
                        Offset = 0;
                    }
                    else if (CC >= 192) //0xC0
                    {
                        Length -= 3;

                        char Byte1 = Reader.ReadChar();
                        char Byte2 = Reader.ReadChar();
                        char Byte3 = Reader.ReadChar();

                        NumPlainChars = CC & 0x03;
                        NumCopy = ((CC & 0x0C) << 6) + 5 + Byte3;
                        Offset = ((CC & 0x10) << 12 ) + (Byte1 << 8) + Byte2;
                    }
                    else if (CC >= 128) //0x80
                    {
                        Length -= 2;

                        char Byte1 = Reader.ReadChar();
                        char Byte2 = Reader.ReadChar();

                        NumPlainChars = (Byte1 & 0xC0) >> 6;
                        NumCopy = (CC & 0x3F) + 4;
                        Offset = ((Byte1 & 0x3F) << 8) + Byte2;
                    }
                    else
                    {
                        Length -= 1;

                        char Byte1 = Reader.ReadChar();

                        NumPlainChars = (CC & 0x03);
                        NumCopy = ((CC & 0x1C) >> 2) + 3;
                        Offset = ((CC & 0x60) << 3) + Byte1;
                    }

                    if (NumPlainChars > 0)
                    {
                        string Tmp = new string(Reader.ReadChars(NumPlainChars));
                        Answer += Tmp;
                    }

                    int FromOffset = Answer.Length - (Offset + 1);

                    for (int i = 0; i < NumCopy; i++)
                    {
                        if(((FromOffset + i) < Answer.Length) && ((FromOffset + i) > 0))
                            Answer = Answer.Substring(FromOffset + i, 1);
                    }
                }
            }
            else
                return;
        }

That's my current implementation, up there.
Even with these checks:

if(((FromOffset + i) < Answer.Length) && ((FromOffset + i) > 0))

It still fails on this line:

string Tmp = new string(Reader.ReadChars(NumPlainChars));

Saying: 'The output char buffer is too small to contain the decoded characters, encoding 'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'.'
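
If I understand it right, ReadChars() pushes the raw bytes through the stream's text Encoding (UTF-8 by default), and arbitrary binary data makes the decoder fall over. The fix is probably to stay in bytes the whole way through and never touch strings; a hedged sketch:

Code:
                    // Hedged sketch: stay in bytes so no text decoding happens.
                    // Reader is the same BinaryReader as above; Answer/Writer
                    // would replace the string with a MemoryStream.
                    MemoryStream Answer = new MemoryStream();
                    BinaryWriter Writer = new BinaryWriter(Answer);

                    if (NumPlainChars > 0)
                        Writer.Write(Reader.ReadBytes(NumPlainChars)); // bytes, not chars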

Sorry if this is posted in the wrong forum, but I don't seem to have permission to post in the 'Bowels' forums.
I've peeked at the SimPE source, but there are too many classes and it is generally a big mess.