I've joined this forum just so that I can post in this thread (!)
Your framework is pretty similar to one I have been developing (or at least conceptualising) for a few years, with incarnations in C++, Python and currently C#. Along the way I have hit a few roadblocks that I have overcome one way or another - I'd like to discuss my approaches and see if you have any suggestions!
One of the first hurdles was component interaction - I realised that, for example, the CollisionComponent would need to notify the HealthComponent of damage, or modify the PositionComponent to rectify overlap. This all got quite messy, with various event handlers and messages being sent around, until I realised the solution was glaringly obvious: use components purely as data, not as processes. I abstracted the processes themselves out into what I now call Systems (familiar?) that could operate over whichever components' data they wished.
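To make the split concrete, here is a minimal Python sketch of that idea - components as plain data, with all the behaviour living in a System. Every name here (Entity, MovementSystem, etc.) is illustrative, not taken from Artemis or any other framework:

```python
class PositionComponent:
    """Pure data - no behaviour."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class VelocityComponent:
    """Pure data - no behaviour."""
    def __init__(self, dx, dy):
        self.dx, self.dy = dx, dy

class Entity:
    """An entity is just a bag of components, keyed by type."""
    def __init__(self):
        self.components = {}

    def add(self, component):
        self.components[type(component)] = component
        return self

    def get(self, kind):
        return self.components.get(kind)

class MovementSystem:
    """All the process logic lives here; it operates over component data."""
    def update(self, entities, dt):
        for e in entities:
            pos = e.get(PositionComponent)
            vel = e.get(VelocityComponent)
            if pos and vel:
                pos.x += vel.dx * dt
                pos.y += vel.dy * dt

player = Entity().add(PositionComponent(0, 0)).add(VelocityComponent(10, 5))
MovementSystem().update([player], dt=1.0)
```

The nice property is that MovementSystem silently skips any entity missing one of the components it needs, so adding new component types never touches existing Systems.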
My first decision with the Systems was to give each one a list of the Components it was interested in. For example, the RenderSystem would require a PositionComponent and a SpriteComponent. Whenever a component is added to or removed from an entity, the new collection of associated components is passed around to the Systems so they may register their interest (a variant on the Observer pattern). Having poked around in Artemis, I see that that is pretty much exactly what you're doing. I only wonder why you decided on a bitset implementation - is the restriction worth the performance gain, considering you probably won't be creating entities or adding/removing components THAT often?
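For what it's worth, here is my understanding of the appeal of the bitset approach, as a hypothetical Python sketch (the names are mine, not Artemis's): give each component type a bit, and the "does this entity carry everything the System needs?" check collapses into one mask-and-compare instead of N set-membership tests:

```python
# Lazily assign each component type a unique bit.
COMPONENT_BITS = {}

def bit_for(component_type):
    if component_type not in COMPONENT_BITS:
        COMPONENT_BITS[component_type] = 1 << len(COMPONENT_BITS)
    return COMPONENT_BITS[component_type]

def entity_mask(component_types):
    """Fold an entity's component types into a single integer bitset."""
    mask = 0
    for t in component_types:
        mask |= bit_for(t)
    return mask

class System:
    def __init__(self, *required):
        self.required = entity_mask(required)

    def interested_in(self, entity_bits):
        # One AND and one compare, regardless of how many components
        # the System requires.
        return (entity_bits & self.required) == self.required

render = System("PositionComponent", "SpriteComponent")
sprite_entity = entity_mask(
    ["PositionComponent", "SpriteComponent", "HealthComponent"])
shape_entity = entity_mask(["PositionComponent", "ShapeComponent"])
```

You're right that the win only matters if the match test runs often - but note it runs on every component add/remove for every registered System, so it can add up with many Systems.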
With the new Systems, I had a few use cases in a game engine to consider, which I'll go through now:
Some objects may have a SpriteComponent, while others have a ShapeComponent that merely needs to be drawn using primitive lines, etc. Issues: how does the RenderSystem differentiate between the two, how does the Component list handle this scenario, and how does the System process each case?
One option: use different RenderSystems for each type of render component. I decided against this, because rendering needs to be done in order of the Z-axis, not in order of render type.
My solution: extend the logic of a System's required-Component list to include boolean expressions, allowing a better description of what the System requires. In this case, the required components would be (RenderComponent && PositionComponent && (SpriteComponent || ShapeComponent)). I implemented this as a virtual boolean function that operates over an array of boolean values corresponding to the ComponentList, and I'm fairly happy with this approach, although it is a little complex.
In terms of differentiating between the various render components and processing each one, I am tempted to use an overloaded dispatch function or just some nested ifs, ugly as they may be.
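The predicate idea and the dispatch can be sketched together in a few lines of Python - all names here are illustrative assumptions, and I'm using component-type strings where a real implementation would use your boolean array over the ComponentList:

```python
class System:
    def accepts(self, types):
        """The 'virtual boolean function' - override per System."""
        raise NotImplementedError

class RenderSystem(System):
    def accepts(self, types):
        # PositionComponent && (SpriteComponent || ShapeComponent)
        return ("PositionComponent" in types and
                ("SpriteComponent" in types or "ShapeComponent" in types))

    def draw(self, types):
        # The nested-if dispatch: pick the right rendering path based on
        # which render component the entity actually carries.
        if "SpriteComponent" in types:
            return "blit sprite"
        return "draw primitive shape"

renderer = RenderSystem()
```

Since all accepted entities sit in one System's list, they can still be sorted by Z before drawing, which is exactly what splitting into two RenderSystems would have broken.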
With first-hand experience in the pain of collision detection and response, I wanted to make sure I nailed this one right off the bat this time. My initial impulse was to use two systems, CollisionDetectSystem and CollisionResponseSystem. The CollisionDetectSystem would pass collision information through a CollisionComponent to the CollisionResponseSystem. The benefits of this approach are the clear separation of logic and the ability to happily insert any other System between the two collision systems. However, the data passed in the CollisionComponent is superficial and not persistent, so I have my doubts as to whether this is truly necessary. One thing I do know is that I'll probably want to subclass the CollisionResponseComponent and use a form of dispatch to handle all collisions.
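Here is a toy sketch of what I mean by the two-system split, using circles for the detection maths - the class names and the transient `contacts` list are my own assumptions, not anything from Artemis:

```python
class CollisionComponent:
    """Transient, per-frame data: filled by detect, drained by response."""
    def __init__(self):
        self.contacts = []

class Circle:
    def __init__(self, x, y, r):
        self.x, self.y, self.r = x, y, r
        self.collision = CollisionComponent()

class CollisionDetectSystem:
    def update(self, entities):
        # Naive O(n^2) pair test - enough to illustrate the data flow.
        for i, a in enumerate(entities):
            for b in entities[i + 1:]:
                if (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= (a.r + b.r) ** 2:
                    a.collision.contacts.append(b)
                    b.collision.contacts.append(a)

class CollisionResponseSystem:
    def update(self, entities):
        resolved = 0
        for e in entities:
            resolved += len(e.collision.contacts)
            e.collision.contacts.clear()  # the data only lives one frame
        return resolved

world = [Circle(0, 0, 5), Circle(6, 0, 5), Circle(100, 0, 1)]
CollisionDetectSystem().update(world)
hits = CollisionResponseSystem().update(world)
```

The "happily insert any other System between the two" property falls out naturally: anything scheduled between detect and response can read (or veto) the contacts before they are consumed.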
This doesn't really concern the framework but it was a use case that I needed to consider when hammering out the details.
The issue of the HUD is a thorny one. I think earlier in this thread someone mentioned a health bar and how to implement it using this kind of framework. The solution I've come to is still quite vague, as I have yet to implement any of it, but I like the idea. Key issues in this area: what IS the health bar, and what components does it have? How does it interact with the RenderSystem?
My solution: the health bar is an Entity. In order to track the health of the player, it has a TargetComponent which contains a reference to the player Entity. It must also have a PositionComponent to determine where it lies on the screen. When debating how to handle the two different types of position (world-relative and screen-relative), I stumbled onto an idea that I'm happy with: allow the PositionComponent to specify which 'world' its co-ordinates lie in. The game world would be one world, and the HUD would be another - then, the window screen simply tracks its movement through each world. This allows any HUD or menu entity to behave exactly like an in-game entity - something I really want to strive for in my framework.
The only part I'm concerned about is how to render the health bar - should the RenderSystem really have to know how to render this particular piece of information? I'd have to make a special exception for the health bar entity in the RenderSystem's processing code, and that's a slippery slope I'd rather not go down. I am leaning towards giving it a ScriptComponent that would execute every frame and query its target's HealthComponent.
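Pulling the HUD ideas together, here is a hypothetical Python sketch of the whole arrangement - TargetComponent, the world-tagged PositionComponent, and the per-frame script. Every name is an illustration of the design above, not working framework code:

```python
GAME_WORLD, HUD_WORLD = 0, 1

class HealthComponent:
    def __init__(self, current, maximum):
        self.current, self.maximum = current, maximum

class PositionComponent:
    """Co-ordinates tagged with which 'world' they lie in."""
    def __init__(self, x, y, world):
        self.x, self.y, self.world = x, y, world

class TargetComponent:
    """A reference to another entity (here, the player)."""
    def __init__(self, entity):
        self.entity = entity

class ScriptComponent:
    """A callable a ScriptSystem would invoke once per frame."""
    def __init__(self, fn):
        self.fn = fn

class Player:
    def __init__(self):
        self.health = HealthComponent(75, 100)

class HealthBar:
    def __init__(self, player):
        # Screen-relative position: lives in the HUD world.
        self.position = PositionComponent(10, 10, HUD_WORLD)
        self.target = TargetComponent(player)
        self.fill = 1.0  # fraction of the bar to draw
        # The script queries the target's HealthComponent each frame,
        # so the RenderSystem needs no health-bar special case - it just
        # draws a rectangle scaled by `fill`.
        self.script = ScriptComponent(self.refresh)

    def refresh(self):
        hp = self.target.entity.health
        self.fill = hp.current / hp.maximum

player = Player()
bar = HealthBar(player)
bar.script.fn()  # what a ScriptSystem would do every frame
```

Under this split, the RenderSystem stays generic: the script turns health into ordinary renderable data, and the bar is drawn like any other entity in its world.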
I realise this is kind of a wall of text, but these are the ideas that have been floating around in my head since 2008, and it's nice to find someone else who has put so much effort into something very similar. I'd very much appreciate your feedback/discussion!