CoolBasic Story

The first lines of CoolBasic code were written 15 years ago. Read the whole story on our website.

(click the image below)


CoolBasic Story

New website scheduled for the Alpha release

Let’s face it, the current CoolBasic website is pretty horrible. In addition to the fact that it’s completely outdated, it’s also ugly!

The placeholder CoolBasic website of failure

To be honest, the (current) website was supposed to be a short-term placeholder. Well, CoolBasic was supposed to be released years ago, too. But not everything in life turns out as planned. For CoolBasic, there was a long lull in the project, and everything kind of fell out of fashion in the meanwhile. Tech evolved. The world changed. The game industry changed. Game tooling changed. And now we’re in a situation where once-great products such as BlitzBasic and Dark Basic have been unable to change and improve with the world. Newer, more modern tools entered the market and took over (such as Unity, which has gained a lot of traction recently), thanks to their competitive technological advantage. Ironically, this is also true for CoolBasic. So when we come back, we’ll have to offer modern solutions! The story of BlitzBasic has taught me a lot.

Along with the entirety of CoolBasic, the website needs a complete overhaul and a better design. However, as development of the new CoolBasic is still ongoing (and not quite ready for a public alpha just yet), I don’t really want to waste time on a temporary website facelift. So let’s launch it with the actual CoolBasic alpha and go all-out with it!

CoolBasic is an interesting software project because it involves so many different sub-projects: We have tech-heavy stuff like the compiler and the runtime virtual machine. Then we have UI coding for the IDE and additional tools. Then there’s the game engine. And also the whole set of services on the web, like the website and the upcoming online documentation system. If I temporarily grow tired of one aspect, I can switch to something else. And recently, I’ve been active with the new website.

Although not planned for immediate launch, I thought that now would be a good time to make some preparations in terms of establishing the design and foundation for the future site. How should it look and feel? What kind of pages and structure should it have so that I can deliver the intended message as effectively as possible? What is its purpose? What information should it contain, and which topics are better handled on the forums with just a link provided? And then there are the technical topics to consider: Should I embrace HTML5 and CSS3 exclusively, or still retain compatibility with older browsers?

Developer that I am, I decided to go with a full HTML5/CSS3/responsive design (mainly because it’s new and fun, but also because I believe that by the time we go fully public, the majority of our target audience will have these things covered).

Responsive design makes the website’s layout adjust to the available viewport dimensions. For desktop use, this is the size of the browser window. For mobile devices, it’s the size of the screen. Responsive design has a couple of branches of interest. The generic design principle applies to all devices and treats them evenhandedly; the content is laid out so that, from the browser’s point of view, the question is: “As the viewport shrinks, where should I rearrange all this content?”

Then there’s the mobile-first approach, which assumes that the site is mainly viewed on a phone or tablet. Mobile-first content is designed to appear in a specific order most of the time. From the browser’s point of view it’s: “As the viewport grows, I should scale everything up.” For the CoolBasic website, I’m going with the generic model.

With these goals in mind, off I went and came up with the basic content structure and a strategy for how I would split the information across the pages. The main purpose of the new website is to introduce the CoolBasic alpha release and make all necessary information available to those who want to test it. The site is designed to last through alpha and beta, and perhaps even carry on past the final release.

Having worked on it for a few days now, it’s starting to shape up. I’m happy with how the responsive layout rearranges content blocks as the viewport size changes. I’m happy with the color scheme too, as it now gives a more “professional” impression but still has a CoolBasic-ish vibe to it. Coherent design: single theme color; accent color; clear typography; image arrangement; symbol icons; CSS3 transitions; proper HTML5 markup.

It wasn’t easy, though. Not having done much web design for a couple of years, I definitely felt a bit out of touch with how fast web technologies have moved forward. Especially all that responsiveness – not an easy task to implement for pages with multiple columns, images, and more complex widgets. CSS media queries are rather simple as a concept, but in practice they can prove difficult to pull off when you have, say, the site logo and the site menu occupying the same “row.” I’m using Twitter Bootstrap as the basis of the layout and Font Awesome for glyphs.

At the moment, I consider the layout close to final, if not final. I have all the headers in place and know what I’m going to write about in each planned paragraph of the website. Currently I’m using placeholder images, and the paragraphs only contain lorem ipsum, though.

In short, yes, a new website is coming, but not until we publish the alpha. I’m not going to reveal it just yet, but maybe a bit closer to the release 😉

Extending CoolBasic with your own DLLs

One of the shortcomings of the old CoolBasic is its limited extensibility. The original design didn’t really support utilizing external code. If you wanted to create a “library”, it would have to be written in CoolBasic, and you’d have to give away the source code. The end user would then include those .cb files at the beginning of their programs.

As CoolBasic is interpreted, the need to execute code at native speed soon became apparent. To relieve this limitation, CallDll was introduced. You’d allocate memory for the input parameters and the return value. It’s not a very convenient way to support or use external DLLs, and I would have liked to implement a proper syntax like Declare Function GetTickCount Lib "kernel32" Alias "GetTickCount" () As Integer. I never did, though.

Something like that may still make it into CoolBasic Classic. It’s on the TODO list, and both the compiler and the runtime already have preliminary support for it in place. But before diving deeper into that, there’s an alternative way to integrate external libraries so that you can use them directly in code. You’d call them just like any other user-defined function, and you don’t have to declare them before use.

Functions owned by the Host

The CoolBasic Classic language itself doesn’t include functions such as LoadImage or PlaySound. They’re provided by the engine that’s interpreting the compiled program. The design philosophy here is that CoolBasic Classic is just a language, and there can be any number of game engines that can run CoolBasic code. One engine may offer a completely different set of commands than the other (and this is also where project types come into play, more about that later.)

The engine basically publishes a list of the commands available in it. This list is then fed both to the code editor (so it can syntax-highlight them) and to the CoolBasic Classic compiler (so it recognizes them). Within the executing engine, CoolBasic functions are called one way, and functions provided by the hosting engine are called differently (since they’re native code).

A host function defines an ID that is presented to the CBC compiler. Let’s say a function “PlaySound” has an ID of 15. When the compiler emits the CallHost instruction, it also attaches the number 15 to it. At runtime, the virtual machine will call the delegate that was registered with ID 15.
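As a rough sketch (the types and layout here are my assumptions; the real Cool VES internals may differ), the ID-based dispatch could look like this, using the CommandAction delegate and the StackEntry type that appear later in this post:

[csharp]// Hypothetical sketch of ID-based host function dispatch (not the actual
// Cool VES source). CommandAction and StackEntry come from CoolVES.
using System.Collections.Generic;
using CoolVES;

public class HostFunctionTable
{
    private readonly Dictionary<int, CommandAction> commands =
        new Dictionary<int, CommandAction>();

    public void Register(int id, CommandAction action)
    {
        commands[id] = action;
    }

    // Invoked when the interpreter decodes a CallHost instruction:
    // the attached ID (e.g. 15 for PlaySound) selects the delegate to run.
    public void CallHost(int id, StackEntry[] stack, ref int stackIndex)
    {
        commands[id](stack, ref stackIndex);
    }
}[/csharp]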

Nice, so that’s how we call native code from a CoolBasic program. But there’s a little design flaw here… What if we wanted to import functions from another DLL and their IDs collided?

Introducing function signatures

A better way to refer to a function is by signature. The signature comprises the function’s name and the number and types of its parameters (but not its return type). Based on the supplied arguments, the compiler determines exactly which function to call (and ambiguous function candidates generate a compile-time error). Therefore, it’s sufficient to just issue “call the method with this signature” at runtime, because the compiler has already enforced that there will be no ambiguity.

So I spent some time refactoring the CoolBasic Classic compiler and the Cool VES virtual machine to now operate with signatures instead of arbitrary function IDs. When the engine is loading, it inspects the available host functions and gathers their signatures. The signatures along with the execute delegates are stored in a dictionary. If there are multiple functions with the same signature, it throws an error. This ensures that the call will be explicitly directed to the same function that the compiler determined. When the interpreter is calling a host function, we’re going to peek into the signature dictionary and execute the appropriate delegate.

This approach makes it possible to have multiple sources of host functions (as long as the provided functions’ signatures don’t collide.) The primary source is the game engine itself. But we can also dynamically inspect the other DLLs, within the same directory, for any valid host functions and import those! Think of it like this: the linking process is done at runtime (rather than after compilation) when the CoolBasic program is loaded into the engine’s memory.
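Here’s a hedged sketch of what that runtime linking could look like. The attribute class names CommandAttribute and CommandParameterAttribute are my assumption, mirroring the [Command]/[CommandParameter] attributes used in the example further below; the real loader may well differ:

[csharp]// Hypothetical sketch of signature-based import (not the actual Cool VES
// loader): scan the DLLs next to the engine, collect [Command] methods, and
// key each delegate by "Name(ParamType,ParamType,...)" - return type excluded.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using CoolVES;
using CoolVES.Integration;

public class HostFunctionRegistry
{
    private readonly Dictionary<string, CommandAction> commands =
        new Dictionary<string, CommandAction>();

    public void ImportDirectory(string path)
    {
        foreach (var file in Directory.GetFiles(path, "*.dll"))
            Import(Assembly.LoadFrom(file));
    }

    public void Import(Assembly assembly)
    {
        var methods = assembly.GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Static));
        foreach (var method in methods)
        {
            var command = method.GetCustomAttribute<CommandAttribute>();
            if (command == null) continue;

            // Build the signature: function name plus parameter types, in order.
            var parameterTypes = method.GetCustomAttributes<CommandParameterAttribute>()
                .OrderBy(p => p.Index)
                .Select(p => p.DataType.ToString());
            var signature = command.Name + "(" + string.Join(",", parameterTypes) + ")";

            // Colliding signatures across DLLs are an error, as described above.
            if (commands.ContainsKey(signature))
                throw new InvalidOperationException("Duplicate host function: " + signature);

            commands[signature] = (CommandAction)method.CreateDelegate(typeof(CommandAction));
        }
    }
}[/csharp]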

All you have to do is drop those DLLs in the same directory as the executing engine. They’ll run at native speed when executed by the CoolBasic program.

Implementing host functions

It’s reasonably easy to implement a host function that is callable from a CoolBasic program. All you need to do is create a managed DLL in any CLR compliant language, such as C# or VB.NET, that contains methods that satisfy this delegate:

[csharp]delegate void CommandAction(StackEntry[] stack, ref int stackIndex);[/csharp]

Let’s create a DLL that introduces a Sleep command that you can then use in your CoolBasic programs. It takes one integer as parameter, the number of milliseconds that the program will wait until continuing execution.

I’ll create a new Class Library project in Visual Studio, name it “MyCoolExtension” and then create a single class “MyCommands” in it. I then add a reference to CoolVES.dll and write this code:

[csharp]namespace MyCoolExtension
{
    using System.Threading;

    using CoolVES;
    using CoolVES.Integration;

    public class MyCommands
    {
        /// <summary>
        /// Halts the program until the specified amount of milliseconds have elapsed.
        /// </summary>
        /// <param name="stack">The stack.</param>
        /// <param name="stackIndex">Index of the stack.</param>
        /// <remarks>Sub Sleep(time As Integer)</remarks>
        [Command(Id = 1, Name = "Sleep", ReturnType = DataType.Void)]
        [CommandParameter(Index = 0, Name = "time", DataType = DataType.Int32)]
        public static void Sleep(StackEntry[] stack, ref int stackIndex)
        {
            var time = stack[stackIndex--].AsInteger;

            Thread.Sleep(time);
        }
    }
}[/csharp]

Build this and drop it to the same directory as the final game.

Note that you’ll have to handle stack manipulation manually. This is a speed optimization. It’s nothing too complicated, though; you simply access the stack array and decrease the pointer once for each parameter your function takes. If you specify something other than Void as the return value type, you’ll also have to assign a value onto the stack at the end.
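To make the stack handling concrete, here’s a hedged sketch of a two-parameter host function with a non-Void return, following the conventions of the Sleep example above. The argument order on the stack and the way a StackEntry is constructed for the return value are assumptions on my part:

[csharp]// Hypothetical example: Add(a As Integer, b As Integer) As Integer.
[Command(Id = 2, Name = "Add", ReturnType = DataType.Int32)]
[CommandParameter(Index = 0, Name = "a", DataType = DataType.Int32)]
[CommandParameter(Index = 1, Name = "b", DataType = DataType.Int32)]
public static void Add(StackEntry[] stack, ref int stackIndex)
{
    // Pop once per parameter (assuming the last argument is on top).
    var b = stack[stackIndex--].AsInteger;
    var a = stack[stackIndex--].AsInteger;

    // Non-Void return type: push the result back onto the stack.
    stack[++stackIndex] = new StackEntry { AsInteger = a + b };
}[/csharp]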

There’s one additional step needed to make the compiler support this new command. There will be a tool with which you can generate a “definition file” out of this DLL. The file contains metadata that describes the functions and constants contained within your library. You only have to do this once. This file is provided to the CoolBasic Classic compiler on the command line.

Definition file handling will be done automatically behind the scenes by the code editor. Also note that you only need the file at development time; the metadata doesn’t need to (and shouldn’t) be present in the final game output directory.

As a library developer, when you want to distribute your work, simply provide the compiled DLL along with the generated metadata file. To use the library, the end user would make a reference to the metadata file in their code editor. This would make all new functions and constants available for syntax highlighting and compilation.

If one can programmatically generate the metadata out of a compiled DLL, why would we need a separate metadata file? Can’t we just inspect the referenced DLLs for this information on the fly when compiling? Well, no… that would create a strong dependency between the CoolBasic Classic compiler and the CoolVES virtual machine. The idea is to decouple CoolBasic Classic as a language from any game engines (or execution engines, rather). The metadata format has been designed to be quite flexible and can describe much more complicated type information than what CoolVES currently provides. In other words, the metadata can span current and future technologies.

All in all, I think this design addresses the requirements in a neat way:

  1. You can achieve machine code speed
  2. It’s easy to create these DLLs
  3. You can harness the full power of the .NET and Mono frameworks
  4. It’s easy to consume these DLLs. They integrate into the editor, compiler, and runtime automatically. No need to declare the functions in code before use

It doesn’t allow you to call unmanaged DLLs without creating a managed wrapper, though. Perhaps that’s better handled with a Declare Function statement…

VERY TEMPORARY

Today I’m going to talk about some old stuff. Don’t worry though, most of you haven’t seen it yet. A year ago, at our traditional summer meeting, I demoed some very early and experimental CoolBasic builds. The reason I want to show code and screens this old is that it makes it easier to explain how things have since changed in upcoming blog posts.

Back in 2013 I actually had a working environment that consisted of a code editor, CoolBasic compiler, and a debugging runtime. You could write CoolBasic code, pass it to the compiler and finally execute it. It couldn’t render game graphics or play sounds, but the very basic text input and output was in place. At that time I focused on code execution rather than game libraries. Control structures, strings, arrays, types, operators, all that kind of stuff. As a result, the demo was probably very boring to watch, but hey, at least it was executing something!

The Compiler

The compiler was naturally the number one priority to get done. As a refresher: it’s written in C#, runs on the .NET CLR and Mono, and is a standalone console application, so it can be called from anywhere. It wasn’t feature-complete back then (for example, Select…Case was missing), but it could handle most control structures and generate the final bytecode.

Compilers aren’t very exciting, though. All you need to know is that it’s now faster, more feature-rich, and no longer has a function limit.

A VERY TEMPORARY Editor

The VERY TEMPORARY editor

Before you go to the forums and start complaining about how awful it looks, mind the window title. Guess why it’s called “VERY TEMPORARY EDITOR”. The caps are intentional. This will *not* be the editor that ships – don’t worry. I promise.

In short, I just wanted to test a) how easy it is to integrate the compiler into an IDE, and b) whether I could use AvalonEdit as the editing control as opposed to the commercial Actipro SyntaxEditor. I’m already quite familiar with the Actipro component (I had a chance to use it in a work project) and I know it’s the state-of-the-art option, but perhaps it would be a bit of overkill for my purposes.

As it turns out, AvalonEdit is just perfect, at least for starters; I can always upgrade to Actipro later. AvalonEdit offers configurable syntax highlighting out of the box, supports code completion popups, and is generally fairly extensible. The syntax highlighting definition is loaded from an external XML file, and the list of commands provided by the game engine is imported from a special “framework definition file” that I can generate automatically from a compiled executable or DLL (via reflection).

It was pretty easy to invoke the compiler, have its output written to a textbox, and parse out any errors it reported. All in all, a successful little test editor.

A VERY TEMPORARY Runtime

The VERY TEMPORARY runtime

That’s not a real game engine. It’s actually just a normal WPF application powered by the new Cool VES virtual machine. In fact, the only available commands are Print, Input, and Timer (for benchmarking.) The intention was to establish a simple “console” which would provide basic input and output so that I could test that the virtual machine doesn’t corrupt the virtual stack or leak memory at any point.

For this reason there are some debugging features available. At any time, I can click this cute pause button:

The pause button

This will halt the virtual machine that’s executing the code. While paused, this UI becomes available:

Metadata: symbols

Debug info: metadata

This view lists all functions and variables and their types. Symbol information is needed for a number of reasons. Firstly, the debugger can emit more meaningful call stacks when the functions’ names are known. Secondly, the runtime can perform proper clean-up when returning from a function, as it knows which resources are stored on the heap.

Bytecode

Debug info: disassembled program

This listing represents the current program in its “disassembled” form. Here I can see that the program was decoded properly and matches what the compiler spat out.

Managed resources

Debug info: managed resources

Remember how LoadImage would return a handle that you’d then store in a variable for later use? These handles are called “resources” internally in Cool VES. For more efficient memory management, Cool VES keeps a list of what has been loaded. A handle is not a real (unmanaged) memory pointer, but a reference to an internal object that also contains metadata about that object.

Interestingly, strings are also managed resources, and they, too, have handles that are manipulated every time a string is stored on and consumed off the stack.

Call stack and locals

Debug info: virtual stack

If I want to see the low-level state about the executing program, this is the view I’m interested in. I can inspect the values of each variable, for each function within the call stack. This information, of course, would be presented in a more intuitive way in a real debugger.

So that’s how things were a year ago. Nowadays the compiler is pretty much feature-complete, and the real code editor is in the works. I also did some engine experiments based on the DirectX 10 interface (initially on DX11, but for whatever reason DirectWrite, which I use for text rendering, isn’t easily usable with it). More on these topics in upcoming posts.

The story of BlitzBasic

BlitzBasic logo

Three months ago BlitzPlus went open-source. A few days ago Blitz3D followed. When I first read about BlitzPlus being made available on GitHub, I kind of silently congratulated them for making that decision. To me it wasn’t surprising at all, given what has happened to the indie game industry and hobbyist game making in general over the past few years. The trend is leaning strongly towards mobile platforms – at least for 2D games. On PC you’d better be going with 3D nowadays – unless your 2D game is in one of the more complex genres, or the touch interface isn’t enough for it. In other words, BlitzPlus had become somewhat obsolete (being 2D-only and desktop-only); it simply doesn’t answer modern game development demand. So in my opinion, it made perfect sense to go open-source, since neither they (the Blitz Research Lab) nor anyone else would be making any real money out of it – especially now that they have two competing products of their own (i.e. BlitzMax and Monkey).

Originally, BlitzPlus was released after Blitz3D. I never understood why they wanted to do that – why develop a new product when they could’ve just brought the new features (native UI commands) into the existing product, Blitz3D? It would’ve strengthened the brand and not confused new customers. Not just that, but then they introduced BlitzMax, whose main selling points were cross-platform support (for the desktop) and language syntax enhancements. Those changes actually do warrant a new product, because of all the breaking changes. From a developer’s point of view, however, cross-platform support is a nightmare to establish, improve, and maintain; you have to provide the base functionality for all supported platforms, which essentially multiplies development time. This also explains why BlitzMax launched without 3D. They simply didn’t have time to do it, especially when the community expectation was “at least Blitz3D level of functionality”.

Thankfully, BlitzMax is rather modular. To this day it has been extended with countless libraries developed by the community – far more than a one-man company can produce on its own. There was even a community-driven 3D module in development. Sadly, it was shut down before release because Mark Sibly announced the “official 3D module” with some teaser screens and info. I believe there may even have been a small demo. Well… that official 3D module never came out. I don’t know what happened, but I guess developing a full-featured 3D engine that works the same way on all supported platforms took so much time that Mark lost interest before it was finished. Just kidding.

A more reasonable theory is that back then the mobile revolution was happening, thanks to the iPhone, and Mark saw the huge market potential. So maybe his point of interest just shifted. As BlitzMax already had functionality that met mobile game development needs (which is to create 2D games as easily as possible), why not utilize that? But if you want to run games on a phone, it places some limits on the engine. A system designed to execute on a desktop was too heavy for a phone. This is when Mark first hinted about “bmax2” on his blog. The new system would have to be “lighter”, which in itself would already require a lot of code rewriting. So why not bring in yet more syntax enhancements and make it a new product? Monkey was born.

Monkey targets 12 platforms, including Ouya and PSM. That’s pretty impressive. The only way to make this happen in a reasonable way is to translate a program written in the Monkey language (basically a BlitzBasic derivative) into some other well-known language and compile that for the target platform. Based on what Mark has written on his blog from 2011 onwards, there have been quite a few technical challenges in getting everything to work on such a broad set of platforms. Fighting environment-related bugs rather than having fun implementing new features into your product can get really tiresome after a while. And I think that explains why the development of Monkey has been slow. It’s still plagued by a large number of little bugs. These are nasty because they seem so minor in the users’ eyes, and if they’re not fixed in a timely manner, users begin to lose interest and faith in the product’s quality. In addition, new potential customers first scan the forum for some general information, then find out about the endless list of these small unfixed things, get a negative first impression, and move on in search of alternatives.

All these products. All these years. Yet there is no 3D functionality apart from Blitz3D. And B3D is still based on the ancient DirectX 7. Even mobile games nowadays heavily utilize 3D because of the sharp improvement in hardware performance in modern phones and tablets. What’s more, the explosion of the mobile market has caught the interest of “more-than-just-indie” level game development companies. These shops have professional 3D artists, animators, programmers, composers, and designers, and they release some really nice-looking 3D titles for mobile devices. Mobile games are no longer all about 2D. And if the Blitz Research Lab wants to offer “serious” development tools that target these platforms, they’ll need a 3D engine, and fast! Unity is already taking over as the de facto environment for developing 3D mobile games. You’re already late!

There is a roadmap for Monkey that hints about “mojo3d”.

Mojo3d is already in development – although it may not be called that once it’s finished. The basic idea here is to provide a simple, immediate mode, low level 3D API that people can use to write higher level stuff with, for example, a backwards compatible Mojo driver capable of enhanced effects/custom shaders etc. Currently, I am aiming for compatibility with all targets capable of gl2/gles2/d3d11. More on this as it develops…

It’s still not out, and reading that topic gave me the impression that it won’t be anytime soon. As I said earlier, the expectation is “minimum B3D level of functionality”, but this time for all target platforms. Mark, haven’t you learned anything? You really want to try and make it available on all 12 target platforms, including HTML5?! Even Unity doesn’t do anything like that (there’s a browser plug-in for rendering instead). This is a MASSIVE job; you can’t handle it alone. Not within a reasonable time anyway. At the same time, your competitors are rocking away with their 30+ member development teams. Technology soars faster than ever before. If you do this, the engine will be outdated right from the get-go.

3 months ago, Mark dropped a bomb. He said in the roadmap topic:

Alas, there is no ETA on ‘next up’ Monkey features.

The sad truth is, Monkey sales are not good and it’s likely I will have to find some kind of ‘supplementary’ income soon – not easy for a guy who’s never held down a real job! Well, since I was 18 anyway. This is not likely to improve productivity, but I do plan on at least continuing to provide updates/fixes and improvements to current Monkey, pretty much as I have been doing recently. But ‘Monkey 2’ (as it was evolving into) is right now on hold.

It’s possible that Monkey 2 was supposed to be a successor for Blitz3D and/or BlitzMax. But since that is now out of the picture and Monkey is going to have only maintenance fixes from now on, I wouldn’t expect new products anytime soon. Which means that Blitz is losing the last momentum it had and will undoubtedly be overshadowed by the competing products.

Soon after that statement, Mark wrote a follow-up on his blog. The article was titled “A slightly depressing update…”, in which he elaborates on the low Monkey sales. BlitzPlus was announced open-source 2 days later.

He agrees that marketing could be better but, in my opinion, fails to see some important points that all play a part in the problem. Anyway… the post was depressing to read, too, and it has caused a lot of stir and concern all over the Monkey and BlitzBasic forums. I have always thought Mark’s writing style is a bit too aggressive, and this is not how you do PR. Even if you have to get it off your chest, you should’ve picked your words better.

And now Blitz3D has gone open-source too. For BlitzPlus it made sense, but I really don’t get the reasoning behind this move. Yes, it’s old (15 years, huh?), but it’s still *the 3D dev tool* in their whole product line. All they had to do was update its engine pipeline to use DirectX 11, and perhaps add more commands that affect the entities’ appearance. Maybe physics too. Make it compatible with the more modern Windows versions. Or maybe MacOS and Linux. I know they couldn’t apply the syntax enhancements that BlitzMax and Monkey have, but quite frankly, they didn’t have to. People would still use it. You don’t have to (and shouldn’t) compete with your own products. Monkey and Blitz3D serve different needs; one is cross-platform and the other specializes in the desktop (it doesn’t matter that it has a simpler syntax).

The problem is that the Blitz Research Lab has too many products. Instead of adding to existing products, they develop a new one. And each time, the new product is expected to contain everything the previous product did. In addition, they keep introducing breaking changes to either the libraries or the language syntax. Even though it may seem like a good idea to release something completely new every now and then (and get paid for the purchase), the fact is that there’s a flaw in the business model.

Blitz products have always been buy-once-get-lifetime-updates-for-free. That can only take you so far. You basically have to develop completely new products regularly to get (partly the same) users to pay more. And there comes a point where the amount of work is simply too much (especially for one man) to sustain this pattern. Too little return for too much work. Now, look at how some other companies do it… companies like Microsoft, Adobe, Autodesk, etc. They too publish a new version of their most popular software every 1-3 years. But they always build on top of the previous version. Take a moment and look at how Visual Studio, for example, has evolved since 2003: it improves each time, but is still definitely based on the previous version. Moreover, C# and VB.NET have both evolved as languages too, adding new features and syntax. But developers still view C# as a single language and Visual Studio as a single product (or brand). As a result, the products are more mature, too.

Even though I think Monkey is a great product, it simply fails at marketing. If you’re a programmer, I completely understand that there might not be any interest in marketing. The Blitz Research Lab is a really small company. I’ve got the impression that Mark Sibly is completely self-employed, and I don’t think there’s anyone else to help with the “boring” things such as writing documentation, managing websites, or doing accounting. It would help a tremendous amount to have a sales guy, or even another developer (who could handle the editor and website, for example). But the problem appears to be that they can’t afford to hire anyone. It’s a vicious circle that feeds itself: you can’t do everything needed → products suffer → sales suffer → and it starts over. It generates frustration, creates stress, and leads to burnout, loss of interest, etc.

As someone in the Monkey forums put it: “Monkey’s major issue is that it simply cannot keep up with the perceived competition in the form of Unity, Gamemaker and other well funded and well teamed tools, and the gap is going to grow over time.”

I think that BlitzPlus and BlitzMax were mistakes. Mark should’ve just developed Blitz3D to have their features (while still targeting multiple desktop platforms). But the mobile revolution blinded him. He took a big risk with Monkey, and it didn’t turn out as well as expected. While the new language features, such as classes with constructors and even polymorphism, do sound good on paper, the fact is that at that point, why would I not choose C++ instead (especially when you lose your sweetest carrot, i.e. 3D)? One might argue that BlitzBasic as a language would then be outdated. But that’s fine, because the game runtime and its ease of use were the things that mattered most. Blitz3D had its target audience. But not anymore, and only because its tech is too old.

While Monkey supporting mobile platforms is indeed a really nice thing, the problem is that those platforms already have official SDKs that everyone uses. If you’re a newbie game developer who wants to create their first iPhone game, your first googling will not lead to the Monkey website. It will lead to the SDK download and developer community sites. Also, you won’t find solutions to your coding problems in Monkey terms, but in the “more-adopted” way instead.

I don’t know how Mark will overcome this, but I think the decision to make Blitz3D open-source without a replacement product already available was made in a hurry and without much thought. I don’t think Monkey has any chance of competing with mobile game development tools (2D or 3D) anymore, but I don’t know if introducing yet another product would be the desperately needed medicine either 🙁 All I can say is: Mark, get a job to stabilize your income, and take time to think about what you want to do with the BlitzBasic brand and products. Put Monkey on Steam if you don’t want to handle marketing on your own.

BlitzPlus open-source topic:
http://www.blitzbasic.com/Community/posts.php?topic=102473

Blitz3D open-source topic:
http://www.blitzbasic.com/Community/posts.php?topic=102907

Source code on GitHub:
https://github.com/blitz-research

Mark’s blog:
http://marksibly.blogspot.fi/

Monkey roadmap:
http://www.monkey-x.com/Community/posts.php?topic=5548

Thoughts about MonoGame

I’ve been toying with this idea of using MonoGame for the new CoolBasic for quite some time now. In theory it would offer many benefits, such as the ability to build cross-platform game engines with minimal code changes. And since it’s based on XNA’s design, there’s a lot of material already on the web, so it has the potential to boost development productivity a great deal. Sounds good, right?

MonoGame logo

I’m already rather familiar with XNA, so I was tempted to just get started with the default Windows project template. But I wanted to see how you’d create a cross-platform solution with OS-specific setup code. For the most part, the only thing that differs per OS is the way you construct the game window; after that you just instantiate the game class, and the rest is common across all platforms.
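As a minimal sketch of that shared part (using the standard XNA/MonoGame API), the Game subclass is platform-agnostic, and only the entry point is platform-specific:

[csharp]using Microsoft.Xna.Framework;

// The shared, platform-agnostic part: the game class itself.
public class MyGame : Game
{
    private readonly GraphicsDeviceManager graphics;

    public MyGame()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        base.Draw(gameTime);
    }
}

// The per-platform part: a Windows-style entry point. Other platforms
// construct their window/activity differently, then run the same MyGame.
public static class Program
{
    [System.STAThread]
    public static void Main()
    {
        using (var game = new MyGame())
        {
            game.Run();
        }
    }
}[/csharp]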

So how do you get started with this cross-platform project? It turns out that the MonoGame dev team has plans to provide a set of official samples that, in their words, should compile against all of the supported platforms (Android, Linux, OSX, Ouya, PSM, Windows, Windows Store, and Windows Phone). This should be the perfect reference material for how to build my Visual Studio solution. It even appears to use the MonoGame NuGet package, so surely I could just download it, open it, and be able to compile it with no additional setup? Well, not exactly…

First of all, the NuGet package configuration is broken due to some versioning issues. I couldn’t solve this with the graphical NuGet package manager in Visual Studio and had to install the package manually from the PM command line instead.

Secondly, the Visual Studio solution doesn’t include all the needed projects! As it turned out, the entire Content project (where you have your assets such as textures, sounds, and fonts) is missing. It’s present in the download, though, but when I attempted to add it to the solution, it just failed to load because of an “unknown project type”. After some googling I learned that apparently MonoGame doesn’t have the Content pipeline fully implemented yet. MonoGame is at version 3.2; how on earth is a feature this essential still missing? Maybe you can load compiled .xnb assets and use them in a game, but you can’t create them without the real XNA. This gave me an important clue – could the unrecognized Content project type be a real XNA content project…

XNA cannot be installed into Visual Studio 2013 directly, but luckily there’s a workaround. Once it was installed, I was indeed able to add the Content project to the solution (you have to manually create a solution folder named “Content” and import it there). I now had 1) the game project, 2) the game’s content project, and 3) the XNA content builder project. Then you can compile and run the example!

MonoGame sample game - Platformer 2D

The sample game is a simple 2D platformer where you can move left/right and jump. Except the game crashes if you jump. The reason was that MonoGame couldn’t find OpenAl.dll, which is used for playing the jumping sound. This DLL is nowhere to be found within the downloaded package. I guess someone forgot to include the dependencies… Moreover, why is it even using OpenAL on Windows, where DirectSound is available?

So I downloaded OpenAL and put the DLL in the output directory, then ran the game again. And it still crashed when playing the jump sound, this time due to some shady DivideByZeroException that originates from deep within OpenAL, completely out of my reach to debug or fix. The problem could be literally anywhere; maybe the sound is invalid? Did the content builder generate a valid xnb file? Did MonoGame load it correctly? Does OpenTK (yes, this is also included) do something with it before it ends up in OpenAL? Do I have the right version of OpenAL (since I had to download it separately)? I ended up commenting out all code that attempted to play sound. The game worked flawlessly after that.

This content building thing concerns me. If the primary way of loading game assets is via the content pipeline, then there should be a fully working *official* way to compile the files into the xnb format. There are 3rd-party tools, but if I were to use this for production, I’d really need the officially supported one. After some digging, it appears that one such tool is currently in development as a standalone application. The documentation states that it should come with the SDK, but it’s not there. So I downloaded the latest MonoGame source from GitHub in the hope of being able to execute it from Visual Studio. Except that it doesn’t contain the full source code for it (for example, there’s no project file).

Finally I found the pipeline tool by installing the latest daily build from their TeamCity site (log in as guest to view it). For whatever reason, it’s installed under MSBuild in Program Files.

First impressions? Not very positive. I had to manually reconstruct the Visual Studio solution, adding and configuring the missing projects. Tools are missing or incomplete. Samples don’t build due to incomplete configuration and missing dependencies. Completely broken source code. Everything just seems so unfinished and unstable at the moment. And I merely touched the surface – who knows how many other issues like these there will be – especially when Linux or OSX enters the picture.

I’m no longer so sure about the idea of enhanced productivity when working with MonoGame if I’m encountering bugs and unimplemented things like these right at the start. It may very well be that I’d end up wasting more time hunting weird MonoGame-specific bugs than just using the real XNA (which just works). All in all, I was surprised at how immature MonoGame still is, and I’m not really confident about its reliability and fitness for serious production use. For Windows I can always build the engine (a more optimized one, even) with SharpDX. Maybe for other OSes I’d fall back on MonoGame, once it’s hopefully in a more polished state.

Branching rules

Last time I talked about synthesis and IL generation. Currently, half of the statements already write valid byte code. I have now completed the branching infrastructure that I mentioned in my previous blog post, and most conditional structures and loops are now prepared – including the If, While, and Until structures. The For-Next loop and the Select-Case structure are next. But before going there, I decided to write up a quick summary of the branching rules and what they mean in terms of code compilation.

First of all, there’s a mechanism in place that eliminates unreachable code. For example, if you had an If condition whose constant expression always evaluates to false, the compiler knows this at compile time and will not write the byte code for that block. Similarly, if you had a While loop that is known to be always true, the compiler doesn’t write the byte code for the expression at all. Code elimination also applies to nested code blocks, meaning that everything inside an unreachable code block is ignored when the byte code output is written. Next, let’s look at the specs:

The If structure

  1. The If and ElseIf statements must know the next ElseIf/Else statement in order to branch when the expression is false
  2. The If, ElseIf and Else statements must know the EndIf statement that is associated with the condition chain; The If statement needs it in order to branch when the expression is false, and the ElseIf and Else statements need it in order to issue an unconditional jump to indicate the end of the previous block
  3. The ElseIf and Else statements must add an unconditional jump to the corresponding EndIf statement before any other byte code, unless the statement in question is an ElseIf with a constant value expression of false
  4. If the If/ElseIf expression is a constant value, don’t write expression byte code
  5. If any of the If or ElseIf expressions evaluates to a constant value of true, don’t write byte code for any of the subsequent blocks in the same chain
  6. If an If block or an ElseIf block has a constant expression of false, don’t write its byte code. This includes nested code blocks

In addition to the rules listed above, there are some minor tweaks that produce cleaner byte code output; I’ve eliminated some branching instructions that aren’t needed, for example. The sketch below illustrates the jump layout these rules produce.
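Illustrative only (simulated with C# goto rather than real Cool VES byte code; the label names are made up): the branch layout that rules 1-3 describe for an If…ElseIf…Else…EndIf chain.

[csharp]using System;

public static class IfChainDemo
{
    // CoolBasic:  If a Then ... ElseIf b Then ... Else ... EndIf
    public static string Lowered(bool a, bool b)
    {
        string result;
        if (!a) goto ElseIf1;      // rule 1: branch to the next member when false
        result = "If-block";
        goto EndIf;                // rule 2: the block knows its EndIf
    ElseIf1:
        if (!b) goto Else1;        // rule 1 again, for the ElseIf
        result = "ElseIf-block";
        goto EndIf;                // rule 3: unconditional jump ends the block
    Else1:
        result = "Else-block";     // the Else block falls through to EndIf
    EndIf:
        return result;
    }

    public static void Main()
    {
        Console.WriteLine(Lowered(false, true));  // prints "ElseIf-block"
    }
}[/csharp]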

The Repeat-Until structure

  1. The Until and Forever statements must know the corresponding Repeat statement in order to branch to the correct location
  2. If the Until statement’s expression is a constant value of true, do nothing
  3. If the Until statement’s expression is a constant value of false, add an unconditional jump
  4. The Forever statement just adds an unconditional jump

The While-EndWhile structure

  1. The While statement needs to know the corresponding EndWhile statement in order to branch when the expression is false
  2. The EndWhile statement needs to know the corresponding While statement in order to issue an unconditional jump
  3. If the While statement’s expression is a constant value of false, don’t write byte code for the expression or code block
  4. If the While statement’s expression is a constant value of true, don’t write the expression’s byte code
  5. The EndWhile statement only adds an unconditional jump to the While statement

Controlling loops

The old “Exit” statement has been renamed “Break”. It exits the Repeat, While, For, and Foreach loops and continues execution from after the loop.

Similarly, a new loop control statement has been added. The “Continue” statement skips to the enclosing loop’s next iteration pass; whether the loop then exits is still up to the associated condition.

The implementation of these is quite straightforward: the Break statement needs to know the loop’s ending statement, and the Continue statement needs to know the loop’s starting statement. Both issue an unconditional jump, as sketched below.
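Again as an illustration with C# goto (made-up labels, not real byte code): Break jumps forward to the loop’s end, and Continue jumps back to its start.

[csharp]using System;

public static class LoopControlDemo
{
    public static void Main()
    {
        var i = 0;
    WhileStart:
        if (!(i < 10)) goto WhileEnd;    // While: branch out when false
        i++;
        if (i % 2 == 0) goto WhileStart; // Continue: jump to the loop's start
        if (i > 7) goto WhileEnd;        // Break: jump past the loop
        Console.WriteLine(i);            // prints 1, 3, 5, 7
        goto WhileStart;                 // EndWhile: unconditional jump back
    WhileEnd:
        ;
    }
}[/csharp]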

I haven’t implemented these yet, and will hold off on doing so until I get the For-Next structure done. I might write a similar blog post about the For-Next and Select-Case structures next time…

Moving from analysis to synthesis

Now that I’m on my summer vacation, I’ve had some time to return to the CoolBasic Classic compiler (after a month or two of slacking, pardon me), and I’m happy to say that the analysis phase of the compilation process is now basically done. After code analysis, the compiler has all the information it needs to produce the final byte code output. This phase is called “synthesis”, and it executes by recursively iterating through all statements based on their scope. This approach enables the compiler to omit byte code generation for unreachable code blocks, such as If/ElseIf/While/Case blocks whose constant expression evaluates to false. Moreover, we could leave user-defined Functions and Subs that aren’t used anywhere in the code out of the output IL entirely. However, it’s important to process all those code blocks fully even though they don’t get written to IL, because we want the compiler to validate the entire source code.

Some statements, such as assignment and Sub calls, are already written into the output byte code stream. As with function calls, the compiler inserts any omitted optional parameters and injects any type conversion operators that might be needed. There are still dozens of statement types to process, but all of them should be quite simple to implement. I decided to tackle the If/ElseIf/Else structure next, because it requires some branching infrastructure I need to develop first. Once that’s sorted out, it’ll be easy to implement other program flow control statements such as the While and Until loops.

Branching, i.e. jumping, is an interesting topic on its own because it’s a common key element and therefore needs to be very efficient. Labels are local to their enclosing Function or Sub symbol, and the Root (the main program) has its own label “scope” as well. This means that the programmer can have identically named labels as long as they exist in different “stack fragments”. The Root/Function/Sub symbols encompass a dictionary of labels, and these are populated already in the parsing phase so that all labels are known during synthesis. The compiler can therefore match forward jumps without expensive scanning. On the other hand, this technique only applies to GoTo statements – many other statement types need to generate jump targets of their own.

Statements are processed in the natural program flow order during synthesis. For example, an If statement is processed before the corresponding EndIf statement, but the If statement in question still needs to know where the EndIf statement is in order to jump to the proper location if the expression is false. The compiler needs to link all cognate If/ElseIf/Else/EndIf statements together so that they can access the next member in the chain as well as the final EndIf. For branching to work, the compiler stores the byte code pointer of each statement (labels are considered statements as well even though they don’t generate any IL) when they get iterated by the synthesizer. Of course the If statement can’t know the final address of the EndIf statement upon processing because the EndIf doesn’t yet have the calculated offset, i.e. forward jumps cannot be determined fully at this point.

To solve this problem, all jumps (apart from GoTo statements) are cached in a special list that consists of tuples of an IL jump instruction and the target statement. Just before the IL is physically written to a file, the compiler simply iterates this list and fills in the correct jump instruction addresses using the calculated code offsets found in the now fully processed statements.
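A minimal sketch of that backpatching list (the type names here are my assumptions for illustration, not the compiler’s actual classes):

[csharp]using System;
using System.Collections.Generic;

// Assumed illustration types: a jump whose target address is patched later,
// and a statement that learns its byte code offset during synthesis.
public class JumpInstruction
{
    public int TargetAddress;
}

public class Statement
{
    public int ByteCodeOffset;
}

public class JumpPatcher
{
    private readonly List<Tuple<JumpInstruction, Statement>> pending =
        new List<Tuple<JumpInstruction, Statement>>();

    // Called when e.g. an If statement must jump forward to its EndIf,
    // whose offset is not yet known.
    public void RecordForwardJump(JumpInstruction jump, Statement target)
    {
        pending.Add(Tuple.Create(jump, target));
    }

    // Called just before the IL is written to a file: every target
    // statement now has a final offset, so the placeholders get filled in.
    public void PatchAll()
    {
        foreach (var entry in pending)
            entry.Item1.TargetAddress = entry.Item2.ByteCodeOffset;
    }
}[/csharp]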

We’re quite close to having a working compiler prototype for internal testing 🙂

New web server

Just a quick update. As you can see, the blog has a new look. This is part of a larger reconfiguration of our web services. There’s now a new domain, coolbasic.fi, which currently redirects to coolbasic.com. We now have a virtual web server instead of standard shared hosting, meaning that we can execute whatever we want on it. We recently migrated the forums to this new server, and everyone’s DNS should be up to date by now. There’s still some setting up to do, such as a custom user account database and a complete CMS solution for the future website; the plan is that everything under coolbasic.fi will feature Finnish content only, and coolbasic.com will house the international site. There will also be separate discussion forums based on localization.

As a bonus note, 11 of us from the Finnish CoolBasic community just participated in a summer cabin holiday. Back there, they managed to forge some motivation in me: there’s an interesting community-driven project that basically implements a custom runtime & game engine for the old CoolBasic. It’s called “CB enchanted” – a very cool name, I might add – and is a reverse-engineered CoolBasic byte code interpreter packed together with a fully hardware-accelerated graphics system. Quite impressive! The real WTF is that these people managed to reverse-engineer the entire byte code structure, inject the engine into the standard CoolBasic compilation process, and deliver overall better performance compared to the good old CB – and all this before CoolBasic Classic 😀

Yeah, I opened the compiler solution a few times and thought: “It’s time to finish this.”

We’ve got some real talent.

Compiler news

1,900 word wall-of-text incoming…

I returned to the CoolBasic project after three months of learning XNA 3D game programming, and have been continuing to develop the CoolBasic Classic compiler for the last two weeks. Although the main focus has been on the compiler, I also designed a new look & feel for the web portal, and it’s being reviewed by the rest of the Team. This entry, however, will be all about the compiler and what’s been done recently.

Can it produce byte code yet? Yes, to some extent. The underlying mechanics are in place, but nothing is written to any file yet. Basically, it can process expressions. This includes (but is not limited to) calculating the result value of a constant expression, resolving names (i.e. mapping identifier names to their declared symbols), function overload resolution, automatic type conversions, and filling in the missing arguments for optional function parameters. A lot of thought has been put into optimizing performance here. In fact, all of the things mentioned above are done in a single iteration over the postfix presentation of the expression. I had to inspect how the .NET framework internally works with arrays, stacks, lists, and type conversion to make sure my algorithms were efficient enough. I actually ended up writing a few implementations of my own for those highly specialized cases I needed (where the framework equivalent would allocate memory in an inefficient way or do more work than needed, for example).

I’m about to go technical…

Expression pre-evaluation
Where I left off before my XNA experimentation was constant expression pre-evaluation. This is where the actual expression processor needs to be implemented: you first convert the expression into postfix notation and then evaluate it. Naturally, constant expressions can contain constant symbols as well, so a circular reference safety check exists. Identifier names are resolved during evaluation because we need to know the data type of each value in order to determine the final data type – and, in the case of a constant expression, the ultimate value.
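As a generic illustration of the postfix approach (not the compiler’s actual implementation, which works on typed tokens rather than strings):

[csharp]using System;
using System.Collections.Generic;

public static class PostfixDemo
{
    // { "2", "3", "4", "*", "+" } is the postfix form of 2 + 3 * 4.
    public static double Evaluate(string[] postfixTokens)
    {
        var stack = new Stack<double>();
        foreach (var token in postfixTokens)
        {
            double number;
            if (double.TryParse(token, out number))
            {
                stack.Push(number);      // operands go straight onto the stack
                continue;
            }
            var right = stack.Pop();     // operands come off in reverse order
            var left = stack.Pop();
            switch (token)
            {
                case "+": stack.Push(left + right); break;
                case "-": stack.Push(left - right); break;
                case "*": stack.Push(left * right); break;
                case "/": stack.Push(left / right); break;
                default: throw new ArgumentException("Unknown operator: " + token);
            }
        }
        return stack.Pop();              // the pre-evaluated value (here 14)
    }
}[/csharp]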

The expression processor is a single method any statement or symbol processor can call. Therefore we don’t know whether the expression is a constant expression or not (i.e. whether its value can be fully pre-calculated). When a symbol is encountered the first time, it will be “processed” (as in a constant symbol’s value needs to be known before it can be used in other expressions). Since the expression processor can be called by a function symbol’s own processor, in order to pre-evaluate optional parameters’ values, a circular reference safety check must exist for functions as well.

The constant symbol and parameter symbol processors simply check whether the result value was a constant and give an error if it isn’t. In addition, before the pre-evaluated value is assigned to a constant or parameter symbol, it’s converted to the destination data type. I had to write my own optimized routines for this because the .NET framework is clearly a bit too slow with anything<-->string conversions. And I learned how much of a pain it can be to convert a double value to a string (just have a look at dtoa.c to get the idea) – I ended up implementing a much simpler algorithm for that conversion.

Even though constant value expressions are fully pre-calculated, I plan to add an expression simplifier for non-constant expressions in the future as well. It would basically turn expressions like a+b+2*20-c into a+b-c+40. However, this is a very difficult topic that needs much research in terms of grouping and ordering analysis, and I simply don’t want to be held back by it (even though I have a partial implementation of it in the V3 compiler already). I don’t see it as that important at the moment, so in order to get this thing out someday, I’ll leave it for now.

Name resolution
The expression is processed once, token by token. Each encountered identifier is resolved by its name. This process is called name resolution. I have three special resolvers: the type resolver, the symbol resolver, and the overload resolver. The type resolver is called by a symbol processor (each symbol is validated by its own processor). A symbol processor exists for every symbol type. Most symbol processors resolve the associated data type, but some do additional work, such as the constant symbol processor, which calls the expression processor in order to cache the pre-calculated value. The type resolver is the simplest one: since all type symbols must be declared in the root scope, we can direct the search there to begin with and only look for symbols classified as Type.

The identifier resolver is a tad more complex. It is told the context symbol and/or scope, and whether the search should be locked to that context only or extended to upper levels if the name is not found immediately (one example of a locked context would be the dot notation path “a.b.c”). An unlocked search is recursive. In CoolBasic Classic, the main program exists in the root scope, but it is not the root symbol; the Global symbol is the ultimate root node of the symbol tree, and all imported framework/engine functions such as LoadImage or MoveObject exist there. This means that you can override them by defining your own functions with the same name. The reason we tell the resolver both the context symbol and the scope is that, for example, functions’ parameters don’t have a scope during optional expression evaluation. In order to access constant symbols defined in the main program’s context, the search needs to go one symbol level up instead of one scope level. Furthermore, functions’ local scopes are isolated from the main program (for obvious reasons), so name resolution for a local identifier, variable, or constant cannot access the main program’s base scope’s local variables and constants.

Overload resolution is an interesting one. First of all, name resolution has already succeeded on a function group symbol. Yes, we now have an extra container for all functions of the same name; it stores all overloads of that function. All we need to do is pick the right overload and map the identifier to its symbol. So we pass a snapshot of the calculation stack to the overload resolver, and based on the data types of those values, the most appropriate overload is chosen. This means that you could have two functions of the same name, say Abs (as in “absolute value”): one that takes and returns an integer, and one that takes and returns a float. The compiler would then pick the right one based on the context in the expression, avoiding unnecessary casting and loss of precision from choosing the wrong one. Should there be more than one equally qualified overload candidate, an error is generated. Should there be no qualified overloads at all, an error is also generated.
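A much-simplified illustration of the idea (the real resolver’s scoring rules are more involved): exact parameter-type matches outrank matches that need a conversion, ties are ambiguous, and zero candidates is an error. With candidates Abs(Int32) and Abs(Float), a Float argument would pick the Float overload.

[csharp]using System;
using System.Collections.Generic;

public enum DataType { Int32, Float, String }

public static class OverloadDemo
{
    // Returns the index of the winning overload among the candidates.
    public static int Resolve(IList<DataType[]> candidates, DataType[] args)
    {
        int bestIndex = -1, bestScore = -1;
        var ambiguous = false;
        for (var i = 0; i < candidates.Count; i++)
        {
            var score = Score(candidates[i], args);
            if (score < 0) continue;                 // disqualified candidate
            if (score > bestScore)
            {
                bestIndex = i;
                bestScore = score;
                ambiguous = false;
            }
            else if (score == bestScore)
            {
                ambiguous = true;
            }
        }
        if (bestIndex < 0) throw new Exception("No qualified overloads.");
        if (ambiguous) throw new Exception("Ambiguous call: equally qualified overloads.");
        return bestIndex;
    }

    private static int Score(DataType[] parameters, DataType[] args)
    {
        if (parameters.Length != args.Length) return -1;
        var score = 0;
        for (var i = 0; i < args.Length; i++)
        {
            if (parameters[i] == args[i]) score += 2;                            // exact match
            else if (IsNumeric(parameters[i]) && IsNumeric(args[i])) score += 1; // cast needed
            else return -1;                                                      // incompatible
        }
        return score;
    }

    private static bool IsNumeric(DataType type)
    {
        return type == DataType.Int32 || type == DataType.Float;
    }
}[/csharp]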

Completing expressions
Now what’s really new here is the injection system. It allows the addition of tokens in the middle of the token list without causing “memory move” operations like List.Insert() does (and yet we’re not operating with a linked list here – it’s not efficient enough in .NET, and I want to avoid the memory overhead generated by it). Omitted arguments for those function parameters classified as “optional” are a good example of injected tokens. But the feature also has one other very important use, for we’re also injecting type conversion tokens. This works with intrinsic data types (integer, float and string). Whenever a value of the wrong type is provided, CoolBasic Classic will try to convert the value to the correct destination type. For example, if you provide array indices as floats, they’ll be automatically converted to integers. Similarly, the function overload resolver tells back which values need to be converted and to which intrinsic data type.

We’ll probably be adding explicit type conversion operators to the language at some point as well; we’re just not sure about their naming yet. Just don’t be disappointed if you don’t see them in the first alpha or beta.

Another cool feature I’ve mentioned before is the presence of the short-circuiting And and Or operators. They are now fully implemented. They’re different from the other operators in that while pre-calculation occurs in the same way as for any binary operator in a postfix calculation stack, they actually produce byte code with conditional jumps instead of a single operator instruction. The hardest part was inferring the offset of the jump, because type conversion instructions, as well as instructions that load omitted optional parameters, get added to the instruction list all the time (so the offset cannot be determined during the conversion from infix to postfix). However, I came up with a clever mechanism that allows reliable offset calculation without having to give up the idea of a single processing pass.

Byte code generation
I mentioned at the start of this blog post that I’m already able to produce byte code that calculates an expression’s value in a real Cool VES environment. A lot of small pieces had to be implemented before this goal was reached, but I think it’s now working quite well. The expression processor, in addition to calculating the result value, generates the full instruction set, including conversion instructions, short-circuit magic, and injected parameters.

One particularly tricky part was implementing dot notation path processing. While simple paths such as “a.b.c” are quite easy to pull off (just lock the name resolution context to the data type of the previous member’s value), it gets a little more complicated when assignment and arrays come into play. I hate “exceptions to processing rules”, so I had to come up with a unified model that supports normal values, dot fields, dot array fields, and normal arrays alike. Array variables are always pointers to their actual buffers, but the value indicated by the index (or indices) on top of the stack must be read by another instruction. And since the context must be locked for name resolution to succeed, the array field loader instruction, unlike those for functions, cannot occur after the argument values (and you cannot access elements that aren’t on top of the stack – I could’ve allowed that by creating my own custom stack structure, but it’d still be fundamentally the wrong thing to do). So an array access needs two instructions: one before and one after the arguments. Just like how the C# and VB.NET compilers do it. It works now. Actually, byte code generated by the CoolBasic Classic compiler is VERY similar to Microsoft CIL (Common Intermediate Language) generated by the .NET compilers. For example, there’s no single instruction for the operator “<=”: it’s expressed with the combination of cgt, ldc.i4 0, and ceq instead – i.e. “not greater”.
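Written out in C#, the identity behind that lowering (it holds for integers; floating-point NaN comparisons need a different encoding): a <= b always agrees with “not (a > b)”.

[csharp]using System;

public static class NotGreaterDemo
{
    public static void Main()
    {
        for (var a = -2; a <= 2; a++)
        {
            for (var b = -2; b <= 2; b++)
            {
                var direct = a <= b;     // the source-level operator
                var lowered = !(a > b);  // what cgt + ldc.i4 0 + ceq computes
                Console.WriteLine(direct == lowered);  // always True
            }
        }
    }
}[/csharp]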

A few operators still lack pre-evaluation support, but those should be painless to write. One of them is the assignment operator (which, again, is a bit of a “different” case from the others), but perhaps I’ll get to that in the next blog post. Now that this huge central piece is mostly done, the next big goal is to start iterating the actual statements like If, While, For, etc. – pretty much branching in general.

TL;DR
Expressions are now processed and refined into real byte code that can execute on the Cool VES platform. The next phase is to process all statements and, with the help of the expression processor, create the final byte code output we can execute!
