Thursday, April 30, 2009

TMS Advanced Toolbars & Menus v3.7.5.0

What's new & improved in this version:
  • New: SubMenuCaptionFont added in the AdvPreviewMenu styler to allow different title & caption fonts
  • New: PagerCaption.TextColorExtended added to allow setting a different font for the appname/docname in the window caption
  • New: Text property to put text on top of the shape button image
  • Improved: dropdown arrow on left-docked toolbars is painted in the right direction
  • Improved: glass frame drawing on Windows Vista with the Aero theme
  • Various smaller fixes & code improvements
All toolbar/ribbon components are also included in TMS Component Pack

How Do You Improve Quality?

When it comes to quality, there are typically three types of projects I have seen:
  1. The focus is on getting the code out the door; then you pick up the pieces.
  2. The focus is on getting quality code out the door, until the deadline nears; then you revert to number 1.
  3. The focus is on getting quality code out the door; as the deadline nears, features may be dropped and the deadline may slip.
When a project starts, everyone seems to be on board with the idea of #3, but typically I have seen it slip all the way down to #1 quite quickly.

At my current employer we have an internal application that is used by 300+ end users.

We used to have:
  • ZERO people dedicated to QA. (4-5 Delphi developers, 1 mainframe programmer, no analysts)
  • Testing was done by end users as they had time.
  • We had no automated or unit tests.
  • The build process was: compile from your own machine and copy it to the network, with any developer free to release any code they wanted, which would sometimes step over other developers' uncommitted code.
  • We had two environments:
    • Development, used by both developers and testers.
    • Production.
  • Exceptions had no details or context to help track down problems.
  • We had a very small and painful bug tracking system.
  • We commonly operated under the #1 model above.
Over the few years that I have been here we have made many changes.
We now have:
  • We now have three people dedicated to QA. (7 Delphi developers, 1 database PL/SQL developer, and 1 analyst)
  • End users still test with the time that they have, but we now have better management support to get the additional time needed.
  • We have a few automated tests, and a few more unit tests, but we really could use many more.
  • Our build process runs every hour through FinalBuilder, alerting us by email when someone checks in bad code.
  • We now have four environments:
    • Development (dedicated to developers only).
    • TEST, where users get a first look.
    • QA, a final look before production, usually with data refreshed from production.
    • Production.
  • We now use JCL Debug to help us find those hard-to-reproduce errors.
  • We have a better bug tracking system, but still not nearly as nice as some of the home-grown solutions I have used at past employers.
  • We are now somewhere between the #2 and #3 models above.

Every release we ask: what can we do better? We try to learn from our mistakes and implement solutions to prevent problems. We release a new version of our software nearly every month, so we are continually improving, and quality has improved in many ways over the past few years.

However, today I feel awful!

We put out a release on Friday morning last week. I don't think I have ever had a release go as badly as this one has. Today we are still working on fixing major problems that were caused by the release.
  • Some of them would have been caught by better testing.
  • Some would have been caught by better management of the information coming in about problems (i.e. it was reported but not acted on!). Since I usually manage this information, that is why I feel awful.
  • Of course, like all systems, some of them would have taken an unreasonable amount of testing to find.

So I have been thinking it's time to step back and look at quality in new and different ways.

I have been making a list for a while of ways, specific to our product, to improve quality. Several items on the list are general enough to share with others.

  • Build a health check system that detects common/possible problems and can be run daily in each environment; it is very database specific. We have scripts for all of these, but not an automated way to run them.
    • Tables without primary keys
    • Missing Synonyms
    • Invalid Synonyms
    • Tables without indexes
    • Missing Grants
    • Disabled Constraints in Oracle
    • Tables that have not been analyzed recently.
    • FK Constraints without index on child table (table level lock will be placed on the parent table if these are found)
    • Invalid Database Objects
    • Plus several that are specific to our schema, i.e. constraints that can't be described in meta data.
  • Error Messages (Exceptions)
    • Ability to Send error messages to a web service
    • Ability to capture a screen shot with the error message.
    • Ability to ask users what they are doing when the exception occurred.
    • Ability to attach reported errors to items in bug tracking system.
  • Code Review to help identify and build additional unit tests around the code.
  • Automated Performance Testing Benchmark system. (I think DUnit has something that will work for us)
  • Get the current Delphi unit tests into a daily build and smoke test instead of just being run by hand by developers.
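As an illustration of the health-check idea, the "tables without primary keys" check can be expressed as a single query against the Oracle data dictionary. A hedged sketch in Delphi (the constant name is mine; user_tables and user_constraints are standard Oracle views):

```delphi
const
  // Tables in the current schema that have no PRIMARY KEY constraint.
  // Each health check is just a query like this; an automated runner
  // would execute them daily in every environment and report any rows found.
  SQL_TABLES_WITHOUT_PK =
    'SELECT table_name FROM user_tables ' +
    'MINUS ' +
    'SELECT table_name FROM user_constraints WHERE constraint_type = ''P''';
```

The other checks (missing grants, disabled constraints, invalid objects, and so on) follow the same pattern: one dictionary query each, where any returned row is a problem to report.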


I have shared some of my ideas on how to improve quality, as well as some of the things we have done.

How have you improved Quality in the software you write?

I am open to any suggestions; this is just a small attempt to think outside the box.

Since I just started this blog, I really hope that someone is reading this and has some good ideas they are willing to share.

Wednesday, April 29, 2009

Code Beautifier Collection 6 for CodeGear RAD Studio 2007 is an open source expert

Special Tip
The 7.0 series, code-named CandyCan, is under development. A milestone will be provided soon.
Since its birth in May 2005, it has served hundreds of C#Builder 1.0, Delphi 8 for .NET, Delphi 2005, and Delphi 2006 users. Because RAD Studio 2007 (including Delphi 2007 and C++Builder 2007) is built on the .NET 2.0 platform, support and updates for the 5.x generation are discontinued, and here comes the new 6.0 generation, which takes full advantage of the .NET 2.0 platform.
After installation, you can read the user manual (manual.pdf) to know more about Code Beautifier Collection and how it can improve your Delphi coding experience.

DeHL 0.3 is out

I’m proud to announce the 0.3 version of the DeHL library. This release focused on bug fixes and extensions to the type support engine. This version should pretty much stabilize what I started and should leave me more time to extend other interesting parts of the library, such as collections.

For those who are new, DeHL is a project of mine designed to fill some holes in Delphi’s RTL — most notably, centralized and OOP-ish Date/Time support, more generic collection classes and “big integer” math support. DeHL also builds up a new layer of “type support” which is very useful for abstracting type management tasks in generic classes. This project only supports Delphi 2009 (and above), since Generics, anonymous methods, and Unicode are widely used within the library. So even if you are not interested in the library itself, it may prove a good read if you are interested in all the new features Delphi 2009 has to offer.

New features:

  • Type support classes now expose methods to convert a type to and from a Variant.

  • Based on the newly added Variant conversion, a new type, TConverter<T1, T2>, is available. You can use it to support “blind” conversion in a class.

  • Much improved type support system

    • Now more functionality is split between generic and non-generic variants of the type support classes (IType, IType<T>, TType, TType<T>).

    • The TypeSupport classes have been renamed to TType and IType respectively.

    • IType/TType now expose Name, Size, Management, TypeInfo and Family properties. You can use those properties to get more information about your generic type.

    • Custom type registration has been improved. A new, cleaner API can be used to register your custom types into DeHL’s type system.

    • TType<T>.Default is now an overloaded function. The first form is the usual one; the second form receives a set of “type families” that the generic type is expected to belong to. This way you can effectively impose run-time type restrictions.

    • TClassType is now generic, with T being class-restricted. This helps avoid compile-time type incompatibilities.

  • All important standard types in Delphi are now supported. This support required the working custom types system, since these types can’t be handled “by default”:

    • Boolean, ByteBool, WordBool, LongBool

    • TDateTime, TTime, TDate

    • NativeInt, NativeUInt

    • Real

    • OleVariant

    • UCS4Char, UCS4String

    • UTF8String

    • RawByteString

Bug fixes:

  • Fixed a bug in BigCardinal and BigInteger variant types related to string conversion.

  • A few changes in TDictionary to avoid compiler problems.

  • In a multi-threaded environment, the custom type support would not unlock properly, leading to a deadlock.

  • Cleanup routines for TDictionary and THashSet were flawed, since they used FillChar to clear the array; managed types had problems with that.

  • Char type support class now acts properly.

Note: the Testing project may not compile due to a bug in the Delphi compiler, but that should not prevent use of the library itself.

Tuesday, April 28, 2009

Steganography with Delphi - Hide Data Inside Images

According to Wikipedia, Steganography is "the art and science of writing hidden messages in such a way that no-one, apart from the sender and intended recipient, suspects the existence of the message, a form of security through obscurity".

In computer terms, steganography means hiding a message within a larger one in such a way that others cannot see the presence or contents of the hidden message.

Interested in how to code a Delphi application to hide data inside an image using steganography?

StegaImage - Steganography with Delphi
StegaImage is a little Delphi application that hides files (information) inside images.
Take a file, hide it inside a BMP image, and that's all. All you get is a new image that looks like the original, but with the data hidden inside it.

How does StegaImage work?
In a 24-bit bitmap, each pixel is made up of a 24-bit number. Each number is composed of three 8-bit numbers (the R, G and B channels). These are the intensity of the Red, Green and Blue colors that create the final color of the pixel.
To hide something inside the image, we will replace the Least Significant Bit (that is, the "rightmost" bit) of each 8-bit channel of every pixel with the bits of the file we want to hide.

The image will lose some quality because the colors of the pixels are no longer exactly the same, but the change will go unnoticed by the human eye.

Obviously, since we are storing only 3 bits per pixel, the image must be huge to accommodate even a tiny file.

For example, if we want to hide 1 MB of data (8,388,608 bits), at 3 bits per pixel we need an image with 2,796,203 pixels, which would measure something like 2,200 x 1,320 pixels. And that is an 8.3 MB file!
We could replace the 2, 3 or 4 rightmost bits of every channel to increase the amount of data we can hide, but the quality would decrease considerably.
Try 8 bits per channel and you'll see what I'm talking about (amazingly, 7 normally produces intelligible, although very ugly, images).
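The LSB replacement described above can be sketched in Delphi roughly like this. This is a simplified illustration, not StegaImage's actual code; the routine name and parameters are mine. One data byte is spread over the low bits of eight consecutive channel bytes of a 24-bit bitmap row:

```delphi
// Simplified illustration of LSB embedding (not StegaImage's actual code).
// Row points at the raw channel bytes of a 24-bit bitmap scanline
// (e.g. obtained via TBitmap.ScanLine); ChannelIndex advances across
// the R, G and B bytes as they are consumed.
procedure EmbedByte(Row: PByteArray; var ChannelIndex: Integer; Data: Byte);
var
  Bit: Integer;
begin
  for Bit := 7 downto 0 do
  begin
    // Clear the least significant bit of the channel, then store one data bit.
    Row^[ChannelIndex] := (Row^[ChannelIndex] and $FE) or ((Data shr Bit) and 1);
    Inc(ChannelIndex);
  end;
end;
```

Extraction is the mirror image: collect (Row^[ChannelIndex] and 1) from eight consecutive channel bytes and shift them back together into one byte.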

Why should you always use FreeAndNil instead of Free

Well, when someone talks about object creation and destruction, they usually imagine something like this:
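A minimal sketch of the pattern meant here (TSomeClass and DoSomething are illustrative names, since the original listing is a generic example):

```delphi
Obj := TSomeClass.Create;
try
  Obj.DoSomething;
finally
  Obj.Free;  // the usual way to destroy an object
end;
```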

However, I want to show that you should avoid the Free call whenever possible, by replacing it with a call to FreeAndNil, for example:
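The same sketch with FreeAndNil (again with illustrative names):

```delphi
Obj := TSomeClass.Create;
try
  Obj.DoSomething;
finally
  FreeAndNil(Obj);  // destroys the object AND clears the reference
end;
```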

Please note that I mean replacing Free with FreeAndNil everywhere. I am not talking only about using FreeAndNil when you want to test with Assigned(SomeObj); I really mean everywhere, not using Free at all. Yes, including cases with local variables.

Why? The reason is simple: there is no reason not to do it. And this post explains why.

When you tell this to the average Delphi developer, chances are high that they will argue against it, saying:

1. Free + nil is not a full equivalent of FreeAndNil. FreeAndNil clears the pointer first and calls the destructor second. If your code uses this reference during destruction, it will no longer work — for example, if we clear a class field but some internal class uses it during the destruction process.

Well, I count that as an advantage. It helps to catch bad design, because you have just described a case where your code either uses a variable instead of Self (i.e. Form1.SomeObj instead of SomeObj) or accesses a partially constructed object (actually, it is more like a partially destroyed object now). By using FreeAndNil you can catch these cases.

Yes, such situations can be made "by design", but even so, you will detect the error immediately (obviously, there will be an Access Violation), so you can revert your changes if you don't want to redesign your code.

2. So what are examples of legitimate/warranted usage of FreeAndNil?

Hmm, it is not very clear what example can be provided here. Excluding complicated examples (like the one in the previous block), your code will work the very same way, with or without FreeAndNil. Replacing Free with FreeAndNil is a purely optional action.

You can compare FreeAndNil with safety belts in cars: if your application runs normally, then FreeAndNil won't be useful. But if your code messes something up, then FreeAndNil (like seat belts) will protect you from the consequences. By clearing the reference, FreeAndNil helps you catch your wrong access immediately. Without it (i.e. by using Free only), your code may continue to run (even without raising an exception) and give wrong results or damage global state. That is quite dangerous.

Note that FreeAndNil is still not enough to fully cover those bad cases, because you can access one object through multiple variables. FreeAndNil will clear one reference, but it won't touch the others. A good example here is all kinds of lists of objects.

3. There is no need for FreeAndNil here! (e.g. local variables)

Well, a local variable can be refactored into a global one later. FreeAndNil will protect you from misuse (plain Free will not). A large amount of code is written using copy-paste; someone can pick up your code and copy it into another place (where the variable has a different scope or is used multiple times), and then they will be in trouble if you didn't use FreeAndNil (and this "they" may even be you, a few months later). Besides, in a large routine you may simply not notice that you have used the variable a few times (for example, in a loop). By always using FreeAndNil you make your code bullet-proof against modifications.

Moreover, it is just convenient to have FreeAndNil everywhere instead of a mix of Free/FreeAndNil. And you don't even need to think about it: "gosh, should I put FreeAndNil here, or is Free enough?!"

If you think that FreeAndNil is still overkill in a certain case, then why don't you use a Destroy call? Indeed, Free is overkill in many situations too!

Note that an object’s destructor is not called directly in 99% of Delphi code (a call to the Free method is used instead). Then why don’t we take a step further and use FreeAndNil instead? The benefit of using FreeAndNil is far greater than the benefit of using Free instead of Destroy. Why? Well, in the first case we get protection from very tricky mistakes (like I said, it is not a panacea, but still a good bonus). In the second case you only gain the ability to skip writing an additional “if”, because if you call Destroy on a nil variable, you’ll get an AV immediately (since Self = nil, any access to an object field will trigger an unmistakable AV). So there is no problem at all; you will fix the code instantly. Compare this with the case when your code silently produces wrong results!

Furthermore, you don’t have to give up the “less typing” bonus: just implement a routine “F” which simply calls FreeAndNil.

4. Using Free instead of Destroy is recommended by CodeGear! And I have never heard of such a recommendation for FreeAndNil.

Well, we have already examined arguments showing that the Free -> FreeAndNil transition has more benefits than the (already completed) Destroy -> Free transition. Now, don’t you think that the argument of not having official approval from CodeGear looks quite weak?

5. Nevertheless, the majority of people (who have heard about this idea) will continue to claim that it is still overkill.


The most cited reason is force of habit: “I write Free automatically, even without thinking”.

Well, actually, this argument is quite serious.

But in this post, I want to show you that there are good reasons to reconsider and change this habit. Of course, there should be a really good reason for breaking your habits. And here it comes…

6. Safety of your code.

Many people say (and they indeed believe it) that an experienced programmer can use FreeAndNil only in the necessary places. In all other cases there is no need to clear the object reference, so you can use Free.

But that is just wrong: you cannot know that.

Imagine that the destructor of your object deletes an object field by using Free (a common action, right?). The destructor for this sub-object calls a virtual method as part of its destruction process. This method can be empty in the base class. Well, so far so good.

Now, someone (maybe you, a few months later) takes these classes and overrides this virtual method. The override calls another (virtual) method, which belongs to your first object. And that method (maybe in yet another child class) tries to use the already deleted field in your first object. Suppose this access runs OK (i.e. the memory manager has not released the memory yet), but the global state of the application is damaged forever. Oops.

FreeAndNil will guard you from this situation (by raising a guaranteed AV); Free will not.

Why did I mention virtual methods here? Because you can’t say: “I know what I am doing; there is no need for FreeAndNil here”. As you can see, you cannot know that! Yes, your two base classes are perfectly fine on their own, but very ordinary code in child classes can lead to the darkest error in the entire class hierarchy. Yes, it is not actually your fault (unless you are also responsible for the child classes), but shouldn’t your code be ideal? And yes, there is a bug in the child class (calling an inadmissible method), but how will you catch it? Without FreeAndNil it is very hard to do.
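The scenario above can be sketched as follows (all type and method names here are illustrative, not from any real code base). The danger is the virtual call made during destruction:

```delphi
type
  TWhole = class;

  TPart = class
  private
    FOwner: TWhole;
  public
    destructor Destroy; override;
    procedure BeforeDying; virtual;      // empty in the base class
  end;

  TWhole = class
  private
    FData: TStringList;
    FPart: TPart;
  public
    destructor Destroy; override;
  end;

procedure TPart.BeforeDying;
begin
  // Empty here; a descendant may override it and call back into FOwner.
end;

destructor TPart.Destroy;
begin
  BeforeDying;                           // virtual call during destruction
  inherited;
end;

destructor TWhole.Destroy;
begin
  FData.Free;                            // a descendant's BeforeDying that touches
                                         // FOwner.FData now uses dead memory.
                                         // With FreeAndNil(FData) instead, that
                                         // access raises an AV immediately.
  FPart.Free;
  inherited;
end;
```

Neither base class does anything wrong on its own; only a later override of BeforeDying exposes the dangling FData reference, which is exactly why "I know this place doesn't need FreeAndNil" is not knowable in advance.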

Okay, even if you still insist on using FreeAndNil only when it is necessary, we have just seen an example where "necessary" is simply not obvious. If you have a habit of using Free, then you may not even think of this case as worth adding FreeAndNil! That is where all the problems come from: you just put a Free call there (as usual) and move on, and then spend a few days debugging this tiny, nasty problem later.

By using FreeAndNil everywhere you dispose of those questions (should I put it here or not?): just put it everywhere! Like I said: FreeAndNil is a safety belt. Realizing that is the only thing left.

Summary: from my point of view, the bonus of protection against elusive, implicit errors outweighs the need to change the habit.


Why wait? Why not use FreeAndNil everywhere, starting today? If you make this your habit, you lose nothing but gain a very powerful bonus.

Nevertheless, I suspect that many Delphi programmers, even if they accept this idea, won’t change their practice, just because “someone said something”. If they have not run into this situation, then to them it does not exist. So they’ll wait until the bug strikes them hard. I’ll repeat it again: FreeAndNil is a safety belt. It protects you from bugs that have not happened yet.

Remark: this long post is just my opinion; feel free to reject it if you (still) don’t like it.

When you reference an object via more than one variable, you can mistakenly use an already disposed object or free the very same object twice. FreeAndNil gives you protection against use-after-dispose only when you have a single reference.

So, even if you use FreeAndNil, you still need additional checks. And if you have a large project and suddenly change all Free calls to FreeAndNil, there are good chances that you’ll run into problems: you’ll probably detect many kinds of misuse. To track down these problems you’ll need a tool.

Such checks can be found in EurekaLog. We will talk about its memory-related features in more detail later (yes, about FastMM too); for now I want to give only a brief overview.

Once enabled, this feature can detect various memory misuse cases and report them through the usual exception notification process in EurekaLog. Those cases include double-free and writes to already freed memory. Similar features are very common in advanced memory managers (like FastMM, for example).

It is quite a powerful feature, but don’t forget that EurekaLog is a debugging tool; it is not an essential part of the Delphi language. You should not write code that relies on its presence, i.e. you should not forget about FreeAndNil, hoping that EurekaLog will handle all bad cases for you. Besides, there can still be complex (and, actually, rare) situations where EurekaLog’s features cannot catch an attempt at invalid memory access (this is not a limitation of EurekaLog; the same is true for other tools too). More on this topic later.

Apart from using FreeAndNil and the diagnostic features in EurekaLog or your memory manager, you should try to minimize the number of references to each object and use some sort of automatic management. For example, if your object can be included in a list, then it is good to add code which removes the object from any list it belongs to on deletion (there is an example in the VCL: the Components list property).

Another approach is using interfaces with reference counting. Actually, you should be careful there: if you mix manual lifetime control (manual calls to the destructor) with automatic control (reference counting in interfaces), then things will go from bad to worse. But if you use only pure interfaces, then you automatically get rid of such errors: the compiler (well, your application at runtime) will look after your objects, not allowing their misuse.

Monday, April 27, 2009

Storing Resource (WAV, MP3, ...) into Delphi Executables

Games and other types of applications which use multimedia files like sounds and animations must either distribute the extra multimedia files along with the application or embed the files within the executable.
Rather than distribute separate files for your application's use, you can add the raw data to your application as a resource. You can then retrieve the data from your application when it is needed. This technique is generally more desirable because it can keep others from manipulating those add-in files.

This article will show you how to embed (and use) sound files, video clips, animations and, more generally, any kind of binary file in a Delphi executable. As the most general case, you'll see how to put an MP3 file inside a Delphi exe.

Resource Files (.RES)

In the "Resource Files Made Easy" article you were presented with several examples of the use of bitmaps, icons and cursors from resources. As stated in that article we can use the Image Editor to create and edit resources that consist of such types of files. Now, when we are interested in storing various types of (binary) files inside a Delphi executable we'll have to deal with resource script files (.rc), the Borland Resource Compiler tool and other.

Including several binary files in your executable consists of 5 steps:

  1. Create and/or collect all the files you wish to put in the exe,
  2. Create a resource script file (.rc) that describes the resources used by your application,
  3. Compile the resource script file (.rc) to create a resource file (.res),
  4. Link the compiled resource file into the application’s executable file,
  5. Use the individual resource elements.
The first step should be simple: just decide what types of files you would like to store in your executable. For example, we will store two .wav songs, one .avi animation and one .mp3 song.

Before we move on, here are a few important statements concerning limitations when working with resources:

a) Loading and unloading resources is not a time-consuming operation. Resources are part of the application's executable file and are loaded at the same time the application runs.

b) All the (free) memory can be used when loading/unloading resources. In other words, there is no limit on the number of resources loaded at the same time.

c) Of course, resource files do increase the size of an executable. If you want a smaller executable, consider placing resources and parts of your project in DLLs and packages.

Let's now see how to create a file that describes resources.

Creating a Resource Script File (.RC)

A resource script file is just a simple text file with the extension .rc that lists resources. The script file is in this format:
ResName1 ResTYPE1 ResFileName1
ResName2 ResTYPE2 ResFileName2
ResNameX ResTYPEX ResFileNameX
ResName specifies either a unique name or an integer value (ID) that identifies the resource. ResType describes the type of resource, and ResFileName is the full path and file name of the individual resource file.

To create a new resource script file, simply do the following:

  1. Create a new text file in your projects directory.
  2. Rename it to AboutDelphi.rc.
In the AboutDelphi.rc file, have the following lines:
Clock WAVE "c:\mysounds\projects\clock.wav"
MailBeep WAVE "c:\windows\media\newmail.wav"
Cool AVI cool.avi
Intro RCDATA introsong.mp3
The script file simply defines resources. Following the given format, the AboutDelphi.rc script lists two .wav files, one .avi animation, and one .mp3 song. All statements in a .rc file associate an identifying name, a type and a file name for a given resource. There are about a dozen predefined resource types, including icons, bitmaps, cursors, animations, songs, etc. RCDATA defines a generic data resource: it lets you include raw binary data directly in the executable file. For example, the RCDATA statement above names the application’s binary resource Intro and specifies the file introsong.mp3, which contains the song for that resource.

Note: make sure you have all the resources you list in your .rc file available. If the files are inside your project's directory, you don't have to include the full path. In my .rc file the .wav songs are located *somewhere* on the disk, and both the animation and the mp3 song are located in the project's directory.

Creating a Resource File (.RES)

To use the resources defined in the resource script file, we must compile it to a .res file with Borland's Resource Compiler. The resource compiler creates a new file based on the contents of the resource script file. This file usually has a .res extension. The Delphi linker will later reformat the .res file into a resource object file and then link it to the executable file of the application.

Borland's Resource Compiler command line tool is located in the Delphi\Bin directory; its name is BRCC32.exe. Simply go to the command prompt, type brcc32 and press Enter. Since the Delphi\Bin directory is in your Path, the BRCC32 compiler is invoked and displays the usage help (since it was called with no parameters).

To compile the AboutDelphi.rc file to a .res file execute this command at the command prompt (in the projects directory):

BRCC32 AboutDelphi.RC
By default, when compiling resources, BRCC32 names the compiled resource (.RES) file with the base name of the .RC file and places it in the same directory as the .RC file.

You can name the resource file anything you want, as long as it has the extension ".RES" and the filename without the extension is not the same as any unit or project filename. This is important, because by default, each Delphi project that compiles into an application has a resource file with the same name as the project file, but with the extension .RES. It's best to save the file to the same directory as your project file.
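Once the compiled .res file is linked into the project (with a {$R AboutDelphi.RES} directive in the source), a resource such as the Intro mp3 can be pulled back out at run time with the VCL's TResourceStream. A minimal sketch (the output file name is arbitrary):

```delphi
uses
  Windows, Classes;

var
  RS: TResourceStream;
begin
  // 'Intro' and RT_RCDATA must match the name and type used in the .rc script.
  RS := TResourceStream.Create(HInstance, 'Intro', RT_RCDATA);
  try
    RS.SaveToFile('introsong.mp3');  // or hand the stream to a player component
  finally
    RS.Free;
  end;
end;
```

The same pattern works for the WAVE and AVI resources; only the resource name and type argument change.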

Create Manifest for Vista UAC if Not Running Under Administrative Rights

From a developer's perspective, Windows Vista's UAC can be problematic for some parts of your Delphi application if the application is not being run by an administrator. One such operation is writing to the Registry database.

Learn how to "request admin rights" by creating an application manifest file....

Windows Vista - User Account Control
User Account Control is a security component in Windows Vista. UAC enables users to perform common tasks as non-administrators, called standard users in Windows Vista, and as administrators without having to switch users, log off, or use Run As.
To help prevent malicious software from silently installing and causing computer-wide infection, Microsoft developed the UAC feature.

From a developer's perspective, the following UAC features are important:

All processes are started as Standard User by default.
A Standard User cannot:
  • Change files in the Program Files folders
  • Change files in the Windows or System32 folders
  • Change the registry under HKLM\Software
  • Change the local machine's date and time
  • ...the list continues...
Programmatically Edit Registry to Run Your Delphi Application on Windows Startup
By programmatically editing the Windows Registry using the TRegistry object, you can "automagically" start programs whenever Windows launches.
The procedure you can use to force "auto-run-on-Windows-startup" for your application might look like this:

procedure RunOnStartup(const sCmdLine: string; bRunOnce: Boolean = False; Remove: Boolean = False);
const
  ApplicationTitle = 'Your Application TITLE';
var
  sKey: string;
  Section: string;
begin
  if bRunOnce then
    sKey := 'Once'
  else
    sKey := '';

  Section := 'Software\Microsoft\Windows\CurrentVersion\Run' + sKey;

  with TRegIniFile.Create('') do
  try
    RootKey := HKEY_LOCAL_MACHINE; // machine-wide entry; writing here is what needs admin rights
    if Remove then
      DeleteKey(Section, ApplicationTitle)
    else
      WriteString(Section, ApplicationTitle, sCmdLine);
  finally
    Free;
  end;
end;
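Calling the procedure for the currently running executable might look like this (ParamStr(0) returns the full path of the exe):

```delphi
RunOnStartup(ParamStr(0));              // register for every Windows start
RunOnStartup(ParamStr(0), True);        // register for the next start only (RunOnce)
RunOnStartup(ParamStr(0), False, True); // remove the entry again
```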

On Vista, if the user running the application does not have admin rights the above code would fail, due to UAC!

Faking UAC Rights - How to Request Execution Level

Even if the user running the above code is not an admin, you can, as a developer, arm your application with a special kind of embedded resource: an application manifest file.

Having the manifest file ensures that Vista's UAC will allow your code to execute (after prompting the user for elevation).

Here are the steps:

  1. Create an XML file with the following content:
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <assemblyIdentity version="1.0.0.0" name="YourApplicationName" type="win32"/>
      <description>elevate execution level</description>
      <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
        <security>
          <requestedPrivileges>
            <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
          </requestedPrivileges>
        </security>
      </trustInfo>
    </assembly>
  2. Name this XML file as YourApplicationName.manifest
  3. Create a text file with following content:

    1 24 "YourApplicationName.manifest"

  4. Name this text file as YourApplicationName.RC
  5. Using the command line, execute the following command:

    brcc32 YourApplicationName.RC -foYourApplicationName.REC

    This will create a new resource file called YourApplicationName.REC

  6. Copy this YourApplicationName.REC file into the resource path of your application.
  7. Include this resource file into the DPR of your application, like so:

    {$R YourApplicationName.REC}

  8. Finally build your application - it is now ready to get admin rights on Windows Vista.
Note 1: in the above steps, replace "YourApplicationName" with your actual application name.

Note 2: the above steps create a resource file to be stored inside the EXE file of your application.

How to Automate the Above "Create Manifest" Process

To help you automate the process I have written a small program that does all the dirty work for you.

Getting Started with Delphi and C++Builder 2009

As someone who hires people to document RAD Studio, one of my challenges has been to find training materials for the brand new user. There are excellent sources for experienced users, but the fresh graduate who may have learned C or Java in school did not find a lot of help for learning the latest RAD Studio.

To remedy this, my team put together a new Getting Started guide that describes the UI and a few language basics. Led by Alex Ciobanu, tech writer Lucian Bentea and editor Denisa Ilascu authored the book with the able assistance of Andrei Susnea (on C++Builder) and Alex (on Delphi). Reviewers Kris Houser, Gary Staas, and Tim del Chario gave good feedback. Raluca Dudau was our first "new user test driver" and proved the point that a good training experience sets the basis for future success.

This book will be followed by a series of language guides the team is currently writing, in between their work on new features and perennial updates to the VCL/RTL.

Sunday, April 26, 2009

Hint for Delphi Prism

Why is it all in English? Can I switch the language? Why does the IDE sometimes use multiple languages?

In a recent workshop Olaf Monien and I held together, the question came up if there was a German version for Delphi Prism. Furthermore, people asked us how to switch the language if possible.

Well, Embarcadero provides English, German, French and Japanese translations for Prism. However, the language cannot be selected, because it is determined by the language of the Visual Studio IDE that Prism is being installed into.

Thus, if you have an English version of Visual Studio, Prism will also be in English. Consequently, you will need to install the German version of Visual Studio in order to use the German localization of Prism.

Personally, I am also not very happy with this solution as I would like to be able to switch the languages as well. I solved the issue by using two virtual machines.

Finally, regarding the last issue mentioned in the caption, many of the messages shown inside the IDE are taken from resources inside the .NET framework. The framework is independent from Visual Studio and can be in a different language. Thus, if you have not installed the German .NET language package, the error messages for all exceptions thrown by the framework will be in English (or in whatever language a language package has been installed for). As Visual Studio uses MSBuild to compile or build .NET projects, these messages will always be taken from the framework. This is frequently how one first notices that the framework is missing a language package: you are running a German Visual Studio IDE, yet the compiler messages are all in English.

To sum it up, one has to be very careful about which products and language packages one installs in order to end up with a development environment in the language one wants to work in.

Saturday, April 25, 2009

Generic Methods and Type Inferencing

In other words - as I understand it - Tiburón/Delphi 2009 will not (initially at least) support type inferencing. To my mind this dramatically reduces the attractiveness of Generic Methods.

What Is Type Inferencing?

As the term suggests, (in this context at least) it is the ability of the compiler to infer the type of some symbol (variable or parameter etc) from the context or code around it.

In C# for example, it means that variables can be declared:

  var a = 10;

The compiler will infer from this that the type of variable a is int (you knew that, right?) without this having to be explicitly declared.

Pop Quiz: What do we think would/should the type of a be if Delphi were to support this?

Hint: What type would a be if it were a constant?

In general I find this to have dubious value. What, for example, can you deduce about the correct usage of a from this:

  var a = someRef.DisplayValue;

To know the type of a you now need to know the type of someRef in order to in turn know the return type of DisplayValue. If you are browsing a project with all references intact and Code Insight (or equivalent) to deliver this information to you, then all is well. But if not, you are going to have a sticky time of it, and either way you still have to trust that the person that wrote the code - which perhaps wasn’t you - also knew those things and didn’t make their own inferencing error.

But what does type inferencing have to do with Generic Methods?

To understand that let’s look at why Generic Methods even come into it.

It all started this evening when I found myself needing to add another overload to my set of Exchange() methods, two of which I show here to give you the idea:

  procedure Exchange(var A, B: Integer); overload;
  procedure Exchange(var A, B: TObject); overload;

(The Exchange procedure exchanges the values of the two passed parameters)

My problem was that because the parameters to these procedures are, by necessity, var parameters, the compiler enforces strict type checking, so the second of the above declarations cannot be used with variables of a TObject derived type, but only variables explicitly and specifically of TObject type itself:

var
  obj1, obj2: TObject;
  form1, form2: TForm;
begin
  Exchange(obj1, obj2);   // OK - references will be exchanged
  Exchange(form1, form2); // ERROR: No compatible overload
end;

The second call to Exchange() will not compile because the TForm type parameters are not compatible (from the compiler’s perspective) with the TObject overloaded version of the routine. I have to add another overload with explicitly TForm type parameters.
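The extra overload needed for this case is trivial but repetitive; a sketch of it (the body simply mirrors the TObject version) might read:

```delphi
// Yet another overload, this time explicitly for TForm references.
procedure Exchange(var A, B: TForm); overload;
var
  i: TForm;
begin
  i := A;
  A := B;
  B := i;
end;
```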

Not for the first time recently I found myself thinking “If only I had Delphi 2009 - Generic Methods would make this so much easier!”. Then I remembered reading Barry Kelly’s note about lack of type inferencing in Delphi 2009, and an alarm bell started ringing.

The Generic Solution

A generic implementation of an Exchange() procedure should be simple enough, and would go a little something like this:

  procedure Exchange<T>(var A, B: T);
  var
    i: T;
  begin
    i := A;
    A := B;
    B := i;
  end;

Which is nice and neat and cuts down on all those overloads. Unfortunately however, using this procedure is actually now more cumbersome, not less:

var
  obj1, obj2: TObject;
  form1, form2: TForm;
begin
  Exchange<TObject>(obj1, obj2);
  Exchange<TForm>(form1, form2);
end;


Frankly for the time it takes to create a new overload of a trivial routine like this, I would rather take those few seconds and reap the rewards later since - in common with most such routines - the code for the procedure will be written only once but code to call it will be written many, many times.

Voices Off: “But lots of overloads will pollute your namespace!”

You know what? That bothers me a lot less than creating unnecessary work for myself. I’m not the sort of developer that is unable to produce a line of code without invoking Code Completion. I may be unusual in this day and age, but I still write code faster than I can pick it from drop-down lists.

And this, in Win32 Delphi at least, is surely the biggest impact? As far as the code goes, any unused overloads will be pruned out by the linker. In .NET - as I understand it - a cluttered namespace becomes a public nuisance, so the imperatives are somewhat different perhaps.

How Would Type Inferencing Help?

Well, assuming that any future Delphi type inferencing system could determine the appropriate type from the parameters (as it can in C#) then we could invoke our generic Exchange<T>() method by simply writing:

var
  obj1, obj2: TObject;
  form1, form2: TForm;
begin
  Exchange(obj1, obj2);
  Exchange(form1, form2);
end;

i.e. just as we can do with overloads.

But until this is possible, at this stage I was feeling that I would continue with overloads until - at least - type inferencing were available. They are just as type-safe and, ironically, produce “consumer” code that should be entirely compatible with a future generic methods implementation that is bolstered by type inferencing.

Using generic methods without type inferencing (in these sorts of cases at least) will simply create unnecessarily verbose and cumbersome “consumer” code.

But not being one to give in, I considered some alternative approaches.

A Truly generic Approach?

Alternatively, Delphi already provides a means to implement a truly generic (lower case “g”) Exchange() method - untyped parameters:

  procedure Exchange(var A, B);
  var
    i: ?
  begin
    i := A;
    A := B;
    B := i;
  end;

You will immediately notice of course that this implementation is neither valid nor complete. The type of i (for “intermediate”, if you were wondering) is not known and indeed not knowable. And furthermore, the compiler simply won’t accept that A := B assignment since, not knowing the types of A and B, it cannot know what instructions are needed. For the same reason, the XOR trick won’t work either. Those untyped parameters seem to have led us to a dead end.

Not quite. (If you are squeamish and/or don’t like smelly code you might want to avert your gaze about now, or at least pinch your nose):

  procedure Exchange(var A, B);
  var
    aa: Integer absolute A;
    bb: Integer absolute B;
    i: Integer;
  begin
    i := aa;
    aa := bb;
    bb := i;
  end;

The absolute keyword is not something to be used lightly - it tells the compiler that the variables aa and bb exist at the same location as the parameters A and B - in this case the compiler even takes care of the fact that the parameters are passed by reference.

Since the variables are typed and the parameters are untyped, how can this be safe?

Well, frankly it isn’t.

In practice if the type of A and B is not the same size as an Integer (32-bits) then things are not going to go at all according to plan.

Equally of course though, if you only ever call this implementation routine with 32-bit sized parameters (which includes strings, object references etc) there won’t be a problem. “IF”.

I should point out at this stage that I explored this approach as a curiosity. I am certainly NOT recommending it! For one thing it is not “truly generic” at all - it only appears to be but in fact has fairly strict conditions for correct use, and does not benefit from any assistance from the compiler to ensure that you do in fact use it correctly!

But we have one more trick up our sleeve - there is (at least) one more way to skin this particular cat that is safer than an untyped parameter approach and only a little more cumbersome to use than a non-type-inferenced Generic approach.

X-Rated Code - Being Explicit

A var parameter is syntactic sugar for passing by reference and allowing modification of the de-referenced value. We can of course achieve the same thing by taking care of the de-referencing aspects ourselves and explicitly passing references to values, rather than the values themselves:

type
  PObject = ^TObject;

procedure Exchange(const A, B: PObject);
var
  i: TObject;
begin
  i := A^;
  A^ := B^;
  B^ := i;
end;

And to call this:

var
  obj1, obj2: TObject;
  form1, form2: TForm;
begin
  Exchange(@obj1, @obj2);
  Exchange(@form1, @form2);
end;

All of which sits happily alongside any other overloaded versions of Exchange(), but which unfortunately requires consumer code that will not be compatible with a future Generic Method implementation of the routine.

BUT, even this won’t work if we are compiling with the Typed @ Operator option enabled.
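For reference, the option in question corresponds to the $TYPEDADDRESS (a.k.a. $T) compiler directive. With it enabled, the @ operator yields a pointer typed to the declared variable, so the TForm calls no longer match the PObject parameters. A minimal sketch of the failure, assuming the PObject-based Exchange() above:

```delphi
{$TYPEDADDRESS ON}  // equivalent to {$T+}
var
  form1, form2: TForm;
begin
  // With typed @ enabled, @form1 is of type ^TForm, not PObject,
  // so this call is rejected by the compiler:
  // Exchange(@form1, @form2);
end;
```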

So I’m left a little stumped.

There are many ways to go about this. If it weren’t for the fact that Generic Methods are nearly upon us I would favour the explicit de-referencing approach. But the desire to create code today that will be compatible with impending new language features is quite compelling. Then again, until we also get type inferencing, a Generic Methods based approach isn’t going to be compatible with anything we can write today anyway.

Despite myself, the untyped parameter approach could yet prove too tempting to resist (particularly if I find myself needing yet another class-specific version of Exchange()) since all the types that I’ve ever found myself wanting to Exchange() meet the 32-bit criteria.

It really hinges on when we might see type inferencing in a Delphi compiler. If that is something we are likely only to see in Commodore then worrying about compatibility of a few calls to Exchange() could prove somewhat misplaced given the far wider issues likely to arise from the move to a 64-bit compiler.

A Compromise?

In the meantime I wonder whether it would not be possible to have a compiler option to disable such strict type checking on var parameters, ideally on a method-by-method basis.

After all, we still have such an option for short string var parameters (although, having never had cause to use it, I don’t know if it works in quite the way I have in mind - i.e. on declarations of, rather than calls to, methods).

In the case of class-type var parameters the more relaxed type checking could allow parameters of any type correctly derived from the formal type:

procedure Exchange(var A, B: TObject);



Then again, I can’t think of any other concrete examples where such a capability would have any practical use. But if this were possible to implement more quickly and safely than a comprehensive type inferencing system, I think I’d take it to keep me going.

Proposal for Automated Variables

Yesterday I logged a Quality Central report proposing the addition of support for “automatic variables” to the Delphi language. Not only is it an excellent idea (in my humble and utterly objective opinion :)), but there is already a keyword in the language that could be co-opted for this purpose, a keyword that has been at something of a loose-end since it was deprecated (rendered obsolete even) a long, long time ago…

The language keyword in question is automated. This was introduced in Delphi 2.0 as part of the initial implementation to support COM automation and, if memory serves, deprecated in the very next release when “proper” (albeit COM) interfaces were added to the language.

The functionality that AutoFree() provides is similar to the concept of an auto pointer - a specific variant of the general concept of a smart pointer. “auto”… “automatic”… “automated”… the similarity in the terms, and the relevance of the semantics, is striking. To me at least.

The proposal in Quality Central drew inspiration directly from exchanges in the comments on my post on an AutoFree() implementation and a realisation that the required behaviour is very similar to that already implemented for interface references - it would not be entirely alien to the Delphi language.

Indeed I believe it would be quite easily understood and welcomed by most, if not all, developers.

The Proposal

The automated keyword should be supported as a decoration on variable declarations. That is local variables, member variables and unit variables:


type
  TFoo = class
    fBar: TBar automated;
  end;

var
  _Bar: TBar automated;

function FooFn;
var
  bar: TBar automated;

The rules for the keyword and the effect of it shall be as follows:

- The automated keyword shall be valid only for pointer and object reference type variables. (*)

- When marked as automated the compiler shall emit code to initialize a variable to NIL. This already occurs for local variables of certain types, most notably interface references, as well as all member variables (albeit indirectly in that case), and currently has to be specified directly, if required, for unit variables.

- When marked as automated the compiler shall emit code to finalize a variable in a manner appropriate to its type. For object references this shall be a call to Free; for pointers a call to FreeMem(). This is directly equivalent to the code already emitted by the compiler to finalize interface references by calling Release().

- automated would not be combinable with absolute.

(*) - it could also be supported on record types with the proviso that the record type in question supports a parameterless constructor (to be called to initialize the record) and a lone destructor (called to finalize the record). But to keep things simple let's stick to object references and pointers, for now at least.

The effect on code of the use of this keyword would be to facilitate:

1. resource protection for temporary objects held in local variables without the need for try..finally blocks.

2. reliable clean up of dependent objects in object hierarchies without the need for objects to implement a destructor (solely) to free those dependent objects. Destructors may still be necessary for other purposes of course.

3. reliable clean up of unit (a.k.a “global”) objects without the need for a unit finalization. Again, finalization may still be required for other purposes.

Note however that it would not prevent these existing techniques from functioning, if required or preferred.
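For contrast, here is the try..finally pattern from point 1 above that automated would make unnecessary, side by side with the proposed form (the second procedure uses the hypothetical syntax of this proposal):

```delphi
// Today: manual resource protection for a temporary object.
procedure SomeFnToday;
var
  bar: TBar;
begin
  bar := TBar.Create;
  try
    // do some work with "bar"
  finally
    bar.Free;
  end;
end;

// Proposed: the compiler emits the equivalent protection itself.
procedure SomeFnProposed;
var
  bar: TBar automated;
begin
  bar := TBar.Create;
  // do some work with "bar"
end;  // "bar" is freed automatically here
```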

There is only one possible danger that I foresee, which is that a developer might mark a variable as automated but then dispose of the referenced object/memory explicitly without re-initializing the variable. e.g.:

  procedure SomeFn;
  var
    bar: TBar automated;
  begin
    bar := TBar.Create;
    // ... do some work with "bar" ...
    bar.Free;  // freed explicitly, but "bar" is not re-set to NIL
  end;

In this case, when SomeFn exits, the finalization of bar will likely result in an error since bar has been left holding a reference to an object that has already been Free‘d.

Note that the initialization of bar as NIL (as a consequence of being automated) specifically avoids any problem if bar is only assigned a reference conditionally in the code.

Note also that explicitly disposing an automated variable is not in and of itself problematic, as long as the variable is also then explicitly re-set to NIL. In the above example, if bar had been NIL‘d once freed, or FreeAndNIL() had been used, then there would not be any problem with the automated behaviour of bar.

The code below illustrates safe explicit disposal and potentially conditional assignment of an automated reference:

  procedure SomeFn;
  var
    bar: TBar automated;
  begin
    bar := TBar.Create;

    // do some work with "bar"

    FreeAndNIL(bar);

    if SomeCondition then
    begin
      bar := TBar.Create;
      // do more work with a new "bar"
    end;
  end;
This code is perfectly safe, will not result in a runtime error and will not leak a TBar.

The potential dangers and pitfalls of an automated variable behaviour implemented as described are actually no different to the potential dangers and pitfalls associated with the manual techniques that it could replace.

I should also mention that I cannot see that the proposal described here would necessarily interfere with, or be interfered with by, the existing, deprecated usage of the automated keyword.

A final observation is that this implementation “feels very Pascal’ly” to me. In a good way.

Raising The Dead

What can possibly go wrong in the simple act of modifying the message of an exception to add some additional diagnostic information and then re-raising it?

Quite a lot actually, and all from one simple mistake.

Here’s a simplified version of a fairly common construct that you will encounter in Delphi code:

var
  a: Integer;
begin
  a := 0;
  try
    Caption := IntToStr(100 div a);
  except
    on e: Exception do
    begin
      e.Message := 'Oops: ' + e.Message;
      raise e;
    end;
  end;
end;

The intention is to add some additional diagnostic information to an exception message to assist in debugging (assuming it is not subsequently handled by some exception handler further up the stack), but without handling the exception itself. i.e. to re-raise the exception.

Fairly innocuous and entirely straightforward - most experienced Delphi developers can probably do this in their sleep.

Which is I think what happened here - sleep-coding I mean - because if you compile and run this code your exception handling will cause an apparent meltdown in your application. After the initial exception is reported - seemingly normally - you will then encounter access violations and even external exceptions.

Some of you may have spotted the problem already, but it took me 20 minutes of trying to figure out why such seemingly simple and harmless code was causing access violations and external exceptions etc etc.

How can raising an exception do that? Perhaps the method was being invoked on an invalid instance or some other similar bad pointer was getting involved in the situation somehow (which was far more complex than this simplified version).

The problem of course lay in the mostly harmless looking:

      raise e;

To re-raise an exception you do not specify an exception instance. You only do that when raising a brand new exception. By referencing the “caught” exception instance “e” in the raise statement, the runtime will naively handle that exception (destroying the exception instance) and then go right ahead and raise that destroyed - dead - instance.

The exception instance gets handled AND re-raised and when the already handled (destroyed) exception arrives at another exception handler, all hell breaks loose.
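The fix is simply to use a bare raise statement, which re-raises the in-flight exception without handling it:

```delphi
try
  Caption := IntToStr(100 div a);
except
  on e: Exception do
  begin
    e.Message := 'Oops: ' + e.Message;
    raise;  // bare raise: re-raises the current exception intact
  end;
end;
```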

It strikes me that the compiler/RTL between them should be able to tell when you do this and treat “raise e” as simply “raise” when “e” is a reference to the currently-being handled exception.

But for now, it’s something to be wary of.


There’s an entirely clean triple-entendre in this post’s title.

1. The technical aspect covered above - raising a handled (dead) exception

2. Stress related health concerns that have been bothering me for the past 12 months or so, and which came to a head at the end of last year, are hopefully if not behind me then at least waving as I sail past them on the freeway. I did not actually die - quite obviously - but believed I was close to it on more than one occasion, which caused me to take a break from engaging in the Delphi community, including this blog. My activity may not be as high as it once was, at least for a while, but I am re-raised at last.

3. It is no exaggeration to say that a factor, albeit a small one, in my recovery has been the undoubted resurgence in the vitality of Delphi itself. A raising from the dead (again, some might say) of Delphi.

Delphi Prism and DataSnap Server Method Stream Parameters

Stream Parameter Types

This table shows the parameter types and return types in the stream samples. The columns under “Server Parameter” describe the parameter or return type in the Delphi DataSnap server methods. The “Delphi Type” column shows the corresponding type used in the Delphi client. The “Delphi Prism Type” column shows the type used in the Delphi Prism client.

Code Samples

The following code samples show the differences between calling server methods with dbExpress in Delphi compared to ADO.NET in Delphi Prism.

Key feature of NativeDB

Direct database access is the single most important
key feature of NativeDB. NativeDB skips the Borland Database Engine (BDE),
Microsoft's Open Database Connectivity (ODBC) and OLEDB layers, making it
easy to deploy, install and configure your end-user application. Instead it connects
directly to the database layer provided by the database manufacturer. The result is
high-performance data access, with less code overhead than any other standard
database interface available today. NativeDB is a set of components that can work
or link directly with your application executable. You don't have to worry about
BDE, ODBC, MDAC or OLEDB distribution, configuration and version conflict issues.

How can XLSReadWriteII help you

  • Native access to Excel files. The user
    doesn't need to have Excel, or any other software installed.

  • XLSReadWriteII works as an invisible Excel
    workbook. All cell values are always accessible.

  • Stability. One of the major goals with
    XLSReadWriteII was to create a product that won't corrupt the Excel files
    and won't alter any data in the file.

  • XLSReadWriteII is easy to use. With the
    formatting interface, you can format cells through a
    cell object.

  • Fast and efficient support when you have
    any questions.

  • No runtime fees for the component.

  • Full source code included.

  • We released the first version of
    XLSReadWrite in 1998. At that time it was the only component that could
    read and write Excel 97+ files. We are committed to having the leading Excel
    solution for Delphi in the future as well.

Friday, April 24, 2009

How to Hook the Mouse to Catch Events Outside of a Delphi Application

Learn how to track the mouse activity even when your application is not active, sits in the Tray or does not have any UI at all.

By installing a system wide (or global) mouse hook you can monitor what the user is doing with the mouse and act accordingly.

Windows Hooks?

In short, a hook is a (callback) function you can create as part of a DLL (dynamic link library) or your application to monitor the 'goings on' inside the Windows operating system.
There are 2 types of hooks - global and local. A local hook monitors things happening only for a specific program (or thread). A global hook monitors the entire system (all threads).

The article "An introduction to hook procedures" states that to create a global hook you need 2 projects: 1 to make the executable file and 1 to make a DLL containing the hook procedure.
The article "Working with keyboard hooks from Delphi" explains how to intercept keyboard input for controls that cannot receive the input focus (like TImage).

Let's Hook the Mouse ...

By design, the movement of the mouse is restricted by the size of your desktop screen (including the Windows Task Bar). When you move the mouse to the left/right/top/bottom edge, the mouse will "stop" - as expected (if you do not have more than one monitor).

Here's an idea for the system-wide mouse hook ....

If, for example, you want to move the mouse to the right side of the screen when it moves toward the left edge (and "touches" it), you might write a global mouse hook to reposition the mouse pointer.

You start by creating a dynamic link library project. The DLL should export two methods : "HookMouse" and "UnHookMouse".

The HookMouse procedure calls the SetWindowsHookEx API, passing WH_MOUSE as the first parameter - thus installing a hook procedure that monitors mouse messages. One of the parameters to SetWindowsHookEx is the callback function that Windows will call when there is a mouse message to be processed:

SetWindowsHookEx(WH_MOUSE, @HookProc, HInstance, 0) ;

The last parameter (value = 0) in the SetWindowsHookEx call indicates that we are registering a global hook.

The HookProc parses the mouse related messages and sends a custom message ("MouseHookMessage") to our test project:

function HookProc(nCode: Integer; MsgID: WParam; Data: LParam): LResult; stdcall;
var
  mousePoint: TPoint;
  notifyTestForm: boolean;
  MouseDirection: TMouseDirection;
begin
  mousePoint := PMouseHookStruct(Data)^.pt;

  notifyTestForm := false;

  if (mousePoint.X = 0) then
  begin
    Windows.SetCursorPos(-2 + Screen.Width, mousePoint.y) ;
    notifyTestForm := true;
    MouseDirection := mdRight;
  end;

  if notifyTestForm then
    PostMessage(FindWindow('TMainHookTestForm', nil), MouseHookMessage, MsgID, Integer(MouseDirection)) ;

  Result := CallNextHookEx(Hook, nCode, MsgID, Data) ;
end;

Note 1: Read the Win32 SDK Help files to find out about the PMouseHookStruct record and the signature of the HookProc function.

Note 2: a hook function does not need to send anything anywhere - the PostMessage call is used only to indicate that the DLL can communicate with the "outer" world.

Mouse Hook "Listener"

The "MouseHookMessage" message is posted to our test project - a form named "TMainHookTestForm". We override the WndProc method to get the message and act as needed:

procedure TMainHookTestForm.WndProc(var Message: TMessage) ;
begin
  inherited WndProc(Message) ;

  if Message.Msg = HookCommon.MouseHookMessage then
    //implementation found in the accompanying code
    Signal(TMouseDirection(Message.LParam)) ;
end;

Of course, when the form is created (OnCreate) we call the HookMouse procedure from the DLL, when it gets closed (OnDestroy) we call the UnHookMouse procedure.
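The glue between the form and the DLL might look like the sketch below; the DLL name and the exact external declarations are assumptions based on the article's description:

```delphi
// Hypothetical import of the two procedures exported by the hook DLL.
procedure HookMouse; external 'MouseHook.dll';
procedure UnHookMouse; external 'MouseHook.dll';

procedure TMainHookTestForm.FormCreate(Sender: TObject);
begin
  HookMouse;    // install the global mouse hook
end;

procedure TMainHookTestForm.FormDestroy(Sender: TObject);
begin
  UnHookMouse;  // remove the hook as soon as it is no longer needed
end;
```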

Download both projects to test and further explore the code.

Note: hooks tend to slow down the system because they increase the amount of processing the system must perform for each message. You should install a hook only when necessary, and remove it as soon as possible.

Castalia for Delphi

Great developers need great tools
Castalia transforms the Delphi programming environment into an amazing development platform. Castalia lives on the bleeding edge of IDE technology, giving you cutting edge tools to write better code faster, understand code more accurately, and improve code you've already written.

Thursday, April 23, 2009

How to Unit Test a Data Module

Data modules are unit tested in exactly the same way that you would unit test any other piece of code. That is, by refactoring the code to be tested in such a way as to separate it from code not relevant to the test. But since people occasionally seem to find this confusing, perhaps a practical example is in order.

Imagine that you have been given the task of fixing a bug in the following code. Before you fix the bug, you would like to write a unit test which shows the bug, in order to prevent regressions after you fix it.

procedure TMyDM.qryFooCalcFields(DataSet: TDataSet);
var
  Minimum, PieceRate: Double;
begin
  Minimum := qryFooHOURS.Value / qryFooMINIMUM_WAGE.Value; // oops; should be *
  PieceRate := qryFooPIECE_RATE.Value * qryFooPIECES.Value;
  qryFooEarnings.Value := Max(Minimum, PieceRate);
end;

Now, tests which access a database are, generally speaking, integration tests rather than unit tests. But the code you’ve been asked to fix seems to be pretty wound up in database access. It doesn’t have to be, though. Let’s refactor:

function TMyDM.CalcEarnings(AHours, AMinimumWage, APieceRate, APieces: Double): Double;
var
  Minimum, PieceRate: Double;
begin
  Minimum := AHours / AMinimumWage; // oops; should be *
  PieceRate := APieceRate * APieces;
  Result := Max(Minimum, PieceRate);
end;

procedure TMyDM.qryFooCalcFields(DataSet: TDataSet);
begin
  qryFooEarnings.Value := CalcEarnings(qryFooHOURS.Value,
    qryFooMINIMUM_WAGE.Value, qryFooPIECE_RATE.Value, qryFooPIECES.Value);
end;

Resist any temptation to actually fix the bug at the moment; we want to write a unit test which fails before making it pass by fixing the bug. Note that you can make CalcEarnings static, meaning you won’t even need an instance of the data module in order to unit test the function:
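As a sketch, the static declaration might read as follows (the surrounding class members are elided):

```delphi
type
  TMyDM = class(TDataModule)
    // ... components and event handlers ...
  public
    // A static class method has no Self, so no data module
    // instance is required to call it from a test.
    class function CalcEarnings(AHours, AMinimumWage,
      APieceRate, APieces: Double): Double; static;
  end;
```

With that declaration in place, the failing test follows: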

procedure TestMyDM.CalcEarnings;
var
  Actual, Delta, Expected, Hours, MinimumWage, PieceRate, Pieces: Double;
begin
  Hours := 1;
  MinimumWage := 7;
  PieceRate := 1;
  Pieces := 1;
  Expected := 7;

  Actual := TMyDM.CalcEarnings(Hours, MinimumWage, PieceRate, Pieces);

  Delta := 0.0001;
  CheckEquals(Expected, Actual, Delta);
end;

There are aspects of this code which will be hard to unit test. I could, for example, pass the wrong field value into the refactored function. I can fix that by abstracting the data access layer, but it’s not a complete fix, since I might have the wrong data in the database. The fact is that unit testing can only catch so much, and integration testing will always be necessary, as well. The important point is that I have isolated the bug which I was asked to fix, and future regressions in that code will now be caught.

I wonder if some Delphi users are confused by the term "unit test." The word "unit" means something very different in Delphi than it does in the term "unit test." In "unit testing," the word "unit" refers to a single piece of functionality. In Delphi, a unit (reserved word) is a source code file. When you write unit tests, you do not have to test an entire unit at a time. Unit tests are for specific cases of using a specific function.

New Delphi Product Manager and R&D Manager

You probably have already heard, but people inside CodeGear have shifted their roles, including the Delphi Product Manager.

Nick Hodges, who was Delphi Product Manager, is now Delphi R&D Manager. So rather than promoting the product to the community, he's now in charge of shaping the future of the product. Having known Nick for many years, I think this is going to be interesting. You can read more in his blog post A New Role for Nick. Let me thank him for his work as Product Manager, which started in a period of turmoil but ended with the product back in good shape, and wish him the best of luck in this new assignment. And if you need ideas about Delphi's future, I'm ready to provide a few...

Mike Rozlog, who was JBuilder Product Manager, is now Delphi Product Manager (see his Changes in the Wind Part II). It won't be easy to step out of Nick's shadow, but Mike has a long-term relationship with Delphi as a product, and even if he's not such a prominent figure in the community right now, he can certainly gain that status and help the product move forward. He seems to have more of a "database/business" oriented perspective, and this can be very important in pushing Delphi in one of its fundamental roles (that of a client/server tool for business applications), one that at times has been a bit neglected.

Chris Pattinson, who was heading the Quality Assurance team for RAD Studio (after some big troubles with quality in previous versions), has been appointed as Director of Quality Assurance for Embarcadero as a whole, supervising the development of both CodeGear and DatabaseGear products. Kudos to Chris for this key assignment, I'm sure he'll do well as he helped Delphi become a solid tool once more.

Well, let me thank the three of them for what they have done in the past at CodeGear and wish them all the best in their future CodeGear/Embarcadero roles. I hope to have time to chat with at least Nick and Mike at Delphi Live.

How to Right Align a Menu Item

In most applications all (top level) menu items are aligned at the left side of the menu bar. I'm sure you have seen applications with at least one aligned on the right side. In most cases this was the "Help" menu item.

Here's how to align a form's Help menu item (for example) on the right side of the menu bar, in Delphi applications.

  1. Add a TMainMenu component to a form (Form1)

  2. Add several (top level) menu items (with sub items)

  3. Have a menu item named "HelpMenuItem"

  4. Use the code below in the form's OnCreate event.

  5. Run the project ... note that the "Help" item is aligned on the
     right side of the menu bar.

procedure TForm1.FormCreate(Sender: TObject);
var
  MainMenu: HMENU;
  mii: TMenuItemInfo;
  Buffer: array[0..79] of Char;
begin
  MainMenu := Menu.Handle;

  //GET Help Menu Item Info
  mii.cbSize := SizeOf(mii);
  mii.fMask := MIIM_TYPE;
  mii.dwTypeData := Buffer;
  mii.cch := SizeOf(Buffer);
  GetMenuItemInfo(MainMenu, HelpMenuItem.Command, False, mii);

  //SET Help Menu Item Info
  mii.fType := mii.fType or MFT_RIGHTJUSTIFY;
  SetMenuItemInfo(MainMenu, HelpMenuItem.Command, False, mii);
end;

Optimal project settings

What will we talk about?
Ok, let’s see which options I mean: open your project’s options and take a look at the “Compiling” and “Linking” pages (those pages are called “Compiler” and “Linker” in old Delphi versions):

“Compiling” page in project’s options (D2009)

“Linking” page in project’s options (D2009)

On the “Compiling” page we are interested in “Stack Frames”, the group of “Debug information”, “Local Symbols” and “Symbol reference info”, plus “I/O Checking”, “Overflow checking” and “Range checking”. On the “Linking” page: “Map file”, “Debug Information” (this option is known as “Include TD32 debug info” in previous Delphi versions) and “Include remote debug symbols”.

Let’s see what these options do, and then what the best settings for them are and why. We will consider two scenarios: a usual application and a EurekaLog-enabled application.

Besides, the project settings can differ between debug and release targets - i.e., whether you compile the application for yourself (for debugging) or for end-user deployment. Newer Delphi versions have settings profiles (Debug and Release): you can specify individual settings for each profile and just toggle between them before compilation. Older Delphi versions have only one global profile, so you have to change every single option manually.

Please remember that you need to do a full build (and not just a compile) if you’ve changed any of these options.

What do these options mean?

The most important settings are the options “Debug information” (”Compiling” page), “Local Symbols” and “Symbol reference info”.

A program is a set of machine (CPU) instructions, which are just numbers. The source code is a text file. The question is: how does the debugger know where to stop when you set a breakpoint in your source? Where is the correspondence between raw numbers and human-readable text?

This correspondence is the debug information. Roughly speaking, debug information is a set of statements like: “machine codes no. 1056-1059 correspond to line 234 of Unit1″. The debugger works thanks to such debug info.

And these options? They control the generation of debug information for your units.

The debug information is stored in the dcu-files together with the compiled code. I.e., the very same Unit1.pas can be compiled into different dcu-files (with or without debug information). Debug information increases compilation time and the size of dcu-files, but it does not affect the size or speed of the resulting application (i.e., debug information is not included in the application).

There are cases when you want to have debug information in your files or (at least) near them. For example: if you are going to do remote debugging or debugging of an external process, or if you want to have a human-readable call stack in your exception diagnostic tool (EurekaLog).

You can embed debug information into your files in two ways: either you play with the project’s options (using the “Map File”, “Debug information” (Linker)/”Include TD32 Debug info” or “Include remote debug symbols” options), or you use some sort of expert (for example, EurekaLog or JCL) which injects debug information in its own format into your executables.

  • “Debug information” – this is the debug info itself. You should enable this option if you want to do step-by-step debugging or have call stacks with names. EurekaLog enables this option automatically for you - unless you mess with your project’s settings…

  • “Local symbols” – an “add-on” to the usual debug information: the correspondence between the program’s data and the variable names in the source code. You need to enable this option if you want to see and change variables. Also, with this option the call stack window in Delphi can display function arguments.

  • “Reference info” – additional information for the code editor which allows it to display detailed information about identifiers, for example where a variable was declared.

These options are very closely related, and usually there is no need to enable or disable only one of them - they are switched together.

  • “Use Debug DCUs” - this very important option switches compilation between the debug and release versions of the standard Delphi units. If you were attentive, you may have noticed that the real pas-files in the Source folder are never used during compilation. Instead, precompiled files (dcu) are used, taken from the Lib or Lib\Debug folders. This trick greatly decreases compilation time. Because a dcu can be compiled with or without debug information, there are two sets of dcus in the Lib folder. By toggling this option you specify which one Delphi should use. If you switch this option off, you won’t be able to debug standard Delphi code or see a detailed call stack for it.

  • “Stack Frames” - this option controls stack frame generation. If the option is off, stack frames won’t be generated unless they are needed. If the option is on, stack frames will always be generated. Stack frames are used by the frame-based stack-tracing method (the raw-tracing method does not require stack frames), i.e. for building a call stack. In a usual application stack frames are generated almost everywhere anyway.

  • “Range checking” - a very useful helper for debugging problems with array-based structures. With it, the compiler inserts additional checks (for strings, arrays, etc.) which verify the correctness of indexes. If you pass an invalid index (by mistake), an exception of type ERangeError is raised, and you can find your error. If the option is off, no additional code is generated. Enabling this option slightly increases the size of your application and slows down its execution. It is recommended to turn this option on for debugging only.
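A small illustrative sketch of this behavior (not from the original post; the procedure name is invented):

```pascal
{$R+} // range checking on, as recommended for debug builds
procedure RangeCheckDemo;
var
  A: array[1..3] of Integer;
  I: Integer;
begin
  I := 4;    // an invalid index, computed at runtime
  A[I] := 0; // with {$R+} this raises ERangeError;
             // with {$R-} it silently writes past the array
end;
```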

  • “Overflow checking” - somewhat similar to “Range checking”, except the generated code checks for overflows in arithmetic operations. If the result of an operation does not fit into the destination variable, an EIntOverflow exception is raised. For example: we have a Byte variable which currently holds 255, and we add 2 to it. The result should be 257, but that cannot be stored in a Byte variable, so the actual result will be 1. That is an integer overflow. This option is rarely used, for three reasons. First, some code depends on it being turned off, so enabling this option will break such code. Second, you usually work with Integer, and your logic rarely deals with values near Integer’s range limits. Third, arithmetic operations are very common code, so adding an extra check to every operation can slow down your code significantly.
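An illustrative sketch of the difference (the procedure name is invented; the {$Q} directive is the standard switch for this option):

```pascal
{$Q+} // overflow checking on
procedure OverflowDemo;
var
  I: Integer;
begin
  I := MaxInt; // 2147483647
  Inc(I);      // with {$Q+} this raises EIntOverflow;
               // with {$Q-} I silently wraps to -2147483648
end;
```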

  • “I/O Checking” - this option is used when working with files in “Pascal style” (AssignFile, Reset, etc.). This feature is deprecated; you shouldn’t use these routines, and therefore you don’t need to touch this option.

  • “Map file” - by enabling this option you tell Delphi’s linker to create a separate map-file alongside your executable. A map file contains a human-readable representation of the debug information. The different settings for this option control the level of detail in the output. Usually there is no need to set it to anything other than “Off” or “Detailed”. The map-file is used by various tools as a primary source of debug information. For example, EurekaLog automatically turns this option on and uses the map-file to create debug information in its own format, which it then injects into the application. That is why you rarely need to change this option manually.

  • “Debug Information” (Linker)/”Include TD32 debug info” - this option embeds debug information for an external debugger, in TD32 format, into your application. You may need it if you use “Run”/”Attach to process” and Delphi cannot find the debug information. Note that the size of your application can increase 5-10 times with this option enabled, so it is not a good choice for storing debug info in a release version - it is better to use EurekaLog (or any other source of debug information supported by EL, for example JCL).

  • “Include remote debug symbols” - very similar to the previous option, but this one creates an rsm-file with debug information for the Delphi remote debugger. You need this option if you want to do remote debugging.

Note that these options can be enabled not only globally, but also separately for each unit (several options can even affect single routines or lines of code). This is done using the usual compiler directives (you can see them in the help by pressing F1 while in the project’s options dialog). For example, “Stack Frames” is controlled by {$W+} and {$W-}.
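For example, a unit could opt in to the checks locally, regardless of the project-wide settings (a sketch; the unit and function names are invented, the directives are standard):

```pascal
unit CheckedMath;

interface

function SafeSum(const A: array of Integer): Integer;

implementation

{$R+,Q+} // range and overflow checking for this unit only,
         // overriding whatever the project options say

function SafeSum(const A: array of Integer): Integer;
var
  I: Integer;
begin
  Result := 0;
  for I := 0 to High(A) do
    Result := Result + A[I]; // overflow-checked in this unit
end;

end.
```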

So, summarizing all this info, we can give recommendations for different cases. They are written down below. Settings that differ from the defaults are marked in bold (i.e. you should toggle them manually).

Usual application, without EurekaLog

Base settings for each profile

All the debug options (”Debug information” (Compiler), “Local symbols”, “Reference info”) do not affect the resulting application and do not disturb us, so you should usually keep them always on.

“Use Debug DCUs” - set it as you like (depending on: “do you want to debug standard Delphi code or not?”).

There is no need to turn on “Stack Frames” option.

There is no need for map-files.

Debug profile

Turn on “Range checking” and (optionally) “Overflow checking”.

“Include TD32 debug info” - enable it only if you use “Attach to process” while debugging.

“Include remote debug info” - enable it only if you want to use remote debugger.

Release profile

Turn off “Range checking”, “Overflow checking”, “Include TD32 debug info” and “Include remote debug info”.

EurekaLog-enabled application

Base settings for each profile

All the debug options (”Debug information” (Compiler), “Local symbols”, “Reference info”) should definitely be turned on. Otherwise, the call stack functionality won’t work.

There is no need to turn off the “Stack Frames” option.

Generation of the map-file should be turned on, but EurekaLog’s expert takes care of this for you.

Debug profile

“Use Debug DCUs” - set it as you like.

Turn on “Range checking” and (optionally) “Overflow checking”.

“Include TD32 debug info” - enable it only if you use “Attach to process” while debugging.

“Include remote debug info” - enable it only if you want to use remote debugger.

Release profile

Turn on “Use Debug DCUs” option.

Turn off “Range checking”, “Overflow checking”, “Include TD32 debug info” and “Include remote debug info”.

What can go wrong if I mess up with my settings?

Well, it can mean, for example: the inability to debug (say, missing information for the remote debugger, or a turned-off “Debug information” (Compiler) option), a large application size (for example, you forgot to turn off “Debug information” (Linker)/”Include TD32 debug info”), slow execution (for example, the application was compiled with debug code), or missing or partial call stacks in EurekaLog (for example, you managed to turn off “Debug information” (Compiler), or you do not use “Use Debug DCUs”). In rare cases there can even be a work/not-work difference (for example, switching on “Stack frames” can slightly decrease the maximum depth of a recursive algorithm). There are a few other minor issues.

BTW, if you develop a component, do not forget that Delphi has two sets of dcus. Generally, toggling the debug options does not affect the interface and implementation parts of dcus, so the different versions of a dcu are compatible with each other. But this is not always so - code may use conditional defines. That is why it is possible to have binary-incompatible versions of dcu-files built from the same source code (there is a real-life example with D2009 and the InstanceSize method).
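A hypothetical illustration (the class and field names are invented, not from the article) of how a conditional define can make the debug and release dcus of the same source binary-incompatible:

```pascal
type
  TTracker = class
  private
    {$IFDEF DEBUG}
    FLog: string;   // present only when DEBUG is defined, so the class
                    // layout (and InstanceSize) differs between builds
    {$ENDIF}
    FCount: Integer;
  public
    procedure Touch;
  end;

procedure TTracker.Touch;
begin
  Inc(FCount);
  {$IFDEF DEBUG}
  FLog := FLog + '.'; // debug-only bookkeeping
  {$ENDIF}
end;
```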

For this reason, you should also ship two sets of dcu-files: one compiled against the usual dcus (”Use Debug DCUs” turned off) and the other against the debug dcu-files (”Use Debug DCUs” turned on). BTW, from the point of view of binary compatibility it does not matter how you set the “Debug information” (Compiler) option for your component.

Tuesday, April 21, 2009

Profiler for Delphi

ProDelphi is a tool to measure the runtime of programs written in Delphi Object Pascal. If an application is too slow, ProDelphi gives you the information needed to optimize it.

The principle of source code instrumenting, a sophisticated correction algorithm and the unique granularity of 1 CPU cycle guarantee correct measurement results. Source code instrumenting guarantees that every part of an application is always measured.

This is contrary to sampling profilers, which give random measurement results; the exact execution time of a procedure cannot be determined (see also profiler types). To compare the measurement accuracy of ProDelphi with any other profiler, a profiler tester is supplied in the download area.

Because of the outstandingly low measurement overhead, even time-critical applications can be measured.
Integration into the Delphi IDE, a call graph and a handy viewer guarantee a fast optimization process.

Writing a RemObjects SDK for .NET Server and Mac Client

In this article, we will walk through the process of creating a fully working client and server application using RemObjects SDK, with the server written in .NET and the client written for Mac OS X.

The article assumes you have Visual Studio and the latest RemObjects SDK for .NET installed in a Windows VM or on a separate PC on your network, and the Xcode 3.0 with the latest RemObjects SDK for OS X on your Mac.

The .NET Server

Start off by going into Visual Studio to create your server application. Choose File - New Project from the menu and locate the RemObjects SDK node underneath the language of your choice (C#, Delphi Prism or Visual Basic .NET). For our example we’ll choose C#, as most people are familiar with that. Don’t worry: you'll be writing literally two lines of code for this, which easily translate into Delphi Prism or VB if necessary.

For the purposes of this article, choose the Windows Forms Server option, pick a destination folder and name your project “MyROServer”. Click OK to continue.

Next, you will be presented with the New RemObjects SDK Server wizard. Since we will be creating our client application in Xcode, on the Mac side, you can uncheck the Also create a matching client application checkbox. Beyond that, all the default settings should be good, although in a real-life project you may want to click into Advanced Project Options to configure names for service and library, or choose different communication channels. Once again, click OK to have the project created.

On the right-hand side of Visual Studio, you now see your project in Solution Explorer. Next to the usual source files in your language of choice (here .cs), you will see a .RODL file which contains the service definitions for your server. Double-click the file (or press the red RemObjects SDK icon in the toolbar above) to edit the RODL.

Monday, April 20, 2009

Web Service Projects and Sessions

Apart from ASP.NET Web Service solutions using the Web Site target, the Feb 2009 update of Delphi Prism introduced Web Projects that also support ASP.NET Web Services. Let's use this new way as basis for the next example that demonstrates the use of the EnableSession attribute.
First of all, do File - New, but this time do not select File - New - Web Site; select File - New - Project instead. Among the New Project targets we can select Web, and then pick an ASP.NET Web Application, as can be seen below.

Note that we can select the (minimum) version of the .NET Framework for this project (3.5 in the screenshot above). Unlike with the Web Site, we cannot select HTTP as the location, but need to specify a normal location for the Web Project. When we deploy the project, we can select an HTTP location, as we'll see later in this section. For the new example, enter MyWebService as the Name for the solution as well as the project. Note that this will produce an ASP.NET Web Application, and not directly an ASP.NET Web Service, but this can be "fixed" after the project is created.
After we click on OK, a new ASP.NET Web Application project called MyWebService is created (in a solution which is also called MyWebService). The Solution Explorer shows the project with a Default.aspx web page, as well as a Global.asax (for the ASP.NET application object) and a web.config file. We don't need the Default.aspx and related source files, so open the node in the Solution Explorer, select the three files Default.aspx, Default.aspx.designer.pas as well as Default.aspx.pas, right-click on the nodes and select Remove. In the dialog that follows, you can click on Delete to permanently delete Default.aspx (and the related .pas files).
Without the Default web page, the project is only an ASP.NET application and needs another main item. Right-click on the MyWebService project node and select "Add - New Item". In the dialog that follows, we need to go to the Web category and select a Web Service.

Change the name of the Web Service from WebService1.asmx to Service.asmx and click on Add.
First, modify the namespace attribute again,

[WebService(&Namespace := '',
Description := 'This web service is a new demo Web Service<br>' +
'generated by <b>Delphi Prism</b> for ASP.NET 2.0 or higher')]
[WebServiceBinding(ConformsTo := WsiProfiles.BasicProfile1_1)]
// To allow this Web Service to be called from script, using ASP.NET
// AJAX, uncomment the following line.
// [System.Web.Script.Services.ScriptService]
Service = public class(System.Web.Services.WebService)

Note the new comment about the System.Web.Script.Services.ScriptService, to allow the web service to be called from JavaScript, a topic for a future article.


Then, we should remove the demo method HelloWorld, and add four new WebMethods.
All of them with the EnableSession attribute set to true, to enable the use of the ASP.NET session object for these four web methods:

Service = public class(System.Web.Services.WebService)
public
  [WebMethod(EnableSession := true)]
  method Remember(const Name, Value: String);
  [WebMethod(EnableSession := true)]
  method Recall(const Name: String): String;
  [WebMethod(EnableSession := true)]
  method Forget(const Name: String);
  [WebMethod(EnableSession := true)]
  method Amnesia; // forget all
end;

Press Ctrl+Shift+C to generate the skeletons for the four web methods, and implement them as follows:

method Service.Remember(const Name, Value: String);
begin
  Session[Name] := Value
end;

method Service.Recall(const Name: String): String;
begin
  if Assigned(Session[Name]) then
    Result := Session[Name].ToString
  else Result := ''
end;

method Service.Forget(const Name: String);
begin
  Session.Remove(Name)
end;

method Service.Amnesia;
begin
  Session.Abandon
end;

The implementation of the four web methods is relatively straightforward, using the ASP.NET Session object to store the name-value combinations, remove one of them, or clear everything (by calling the Session.Abandon method).

We can now run the server in a browser using Debug - Start without Debugging.
This will start the ASP.NET Development Server (at http://localhost:2743 on my machine), showing the web service on localhost.

If you didn't select Service.asmx as the Start Page (by right-clicking on it in the Solution Explorer), you may need to manually add Service.asmx to the URL in the browser. Then you can use the browser to test the web service. For example, you can invoke the Remember method to make the web service remember a name-value combination, which you can then use to test Recall (or Forget). This should all work just fine, preparing the way for the actual web service client.

Although you may want to deploy the web service before releasing the client, we can already create a client application and let it consume the web service as part of the same solution.
First, let's create a new Windows Application and call it WindowsClientApplication.