What are you working on? (Game Making Thread)

Jason

Awesome Bro

Just told Potion a crazy idea for what I'm thinking of doing with Over The Horizon... it's going to remain the same for the most part, like 99% the same as I was wanting it BEFORE I thought of this idea, lol... but it's going to be interesting methinks. Could do with an extra person to help me with story padding and character development though, if anyone is interested, lol.
 
I've got a few nice puzzle ideas!
I attempted them in RPG Maker, but RPG Maker is shit, so I'll attempt them in C2.
If I can pull this off it will be awesome for a game idea I have!
 
I have about 20 quests written up, I just keep stumbling when going to implement them; things always come up and I get bored. The only ones I've managed to implement are short and boring errand runs with no storyline. I have big ones planned that open up new continents and have hundreds of NPCs (the game currently has only 63!), I just can't get the enthusiasm up!
 
I finally got around to updating and fixing my transition.dll source code. Now I have vague patched in and working, and I'm wrapping up a script that integrates into the Graphics class to overwrite the default transition setup with it.
 
After lots of testing I am relatively sure afar would work nicely as a kindle fire app... but I'm not sure if doing so would be pointless. It wouldn't bring any money or anything. I thought about maybe having it say "if you enjoy this game, pay what you think it's worth" like humble bundle, but I'm not sure anyone would. It's not really about the money, I'd just need some funding if it became popular at all, so I'd need a plan in place if I made it into an app AND it took off.

It's not great right now but I think if I implemented all the planned quests and had regular updates it might be good... I don't know.
 
So I tossed my updated transition dll and scripts to theory, since he was playing around with making an oversized window in VX Ace. My god, the oversized transitions with the vague argument (it's a fade effect for all transitions) are so beautiful.
 
Your transition stuff, I'm guessing it uses the RM bitmap class?

You're kind of giving me some ideas. I'm guessing the transition takes a snapshot of the screen and then does effects on that one snapshot? Might be able to use my GLSL shaders on that and as it's a single snapshot the performance probably won't be so bad.
 
Xilef":3aqjhs36 said:
Your transition stuff, I'm guessing it uses the RM bitmap class?

You're kind of giving me some ideas. I'm guessing the transition takes a snapshot of the screen and then does effects on that one snapshot? Might be able to use my GLSL shaders on that and as it's a single snapshot the performance probably won't be so bad.

Yeah, it's a variation on the old .dll file I had. Theory seems to be having fun with hardware accelerated graphics at the moment, so I'm waiting to see what he's cooked up.
 
So, we're apparently working on effectively the same thing you were, Xilef. Only, we're not. Rather than going for accelerated filters, we're doing something interesting and making a hardware accelerated equivalent to a default class. Specifically, we're replacing the tilemap. I'd share a screenshot right now, but I have a version where texture loading is broken and it has the wrong offset.
 
I don't know enough RGSS to make a tilemap class - pretty interesting stuff. Having hardware-accelerated sprites would open up a ton of opportunities with special effects and 3D stuff.

How is it being done, though? Drawing it all on a GPU page then copying the result to a GDI bitmap which is rendered to the screen?
 
Glitchfinder":vpbzg4cu said:
That sums it up nicely, Xilef. Theory even managed to find a plugin that lets us copy to and from the bitmap without dropping acceleration.
Which plugin is that? With OpenGL the standard option is the buffer copy, where you pause the GL driver and copy the current buffer's bytes directly to a CPU memory buffer (RGSS bitmaps in our case). Is there a plugin that lets you draw directly to a GDI bitmap with GL/DX?

EDIT: I know you can draw directly to the GDI context (but there is flickering due to the vertical refresh); I showed a screenshot of a triangle on the RMVX Ace title screen which did this for a single frame to avoid the flicker. Have you guys got it to target GDI bitmaps as a context?
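
For reference, the only way I know of to make GL target a GDI bitmap directly is the PFD_DRAW_TO_BITMAP route, something like the rough, untested sketch below - though as far as I know that path only gives you the generic software implementation, which is exactly what we're trying to escape:

    // Rough, untested sketch: a GL context on a memory DC that has a 32-bit
    // DIB section selected into it. As far as I know PFD_DRAW_TO_BITMAP only
    // gets you the generic (software) implementation, so it defeats the point.
    #include <windows.h>

    HGLRC CreateBitmapContext(HDC memDC)   // DC with the DIB section selected
    {
        PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR) };
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;

        int format = ChoosePixelFormat(memDC, &pfd);
        SetPixelFormat(memDC, format, &pfd);
        return wglCreateContext(memDC);   // after rendering, the DIB holds the pixels
    }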
 
I'm not well versed in OGL myself, but from what I see in the code, it would appear that instead of having a screen-based render target, our target is actually an RM* bitmap memory buffer.
 
Basically, what we're doing is this:

First, in RGSS, we create our target viewport, and a bitmap set to its dimensions. We pass the dimensions, and a pointer to the first byte of the bitmap's data, to our dll.
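
The handoff looks something along these lines (names are just illustrative, not our actual exports; RGSS calls it via Win32API with the viewport size and the address of the bitmap's first byte):

    // Hypothetical shape of the RGSS -> dll handoff.
    struct RenderTarget {
        int            width;
        int            height;
        unsigned char* pixels;   // first byte of the RM bitmap's pixel data
    };

    static RenderTarget g_target = { 0, 0, 0 };

    extern "C" __declspec(dllexport)
    int InitTarget(int width, int height, unsigned char* rgssBitmapPixels)
    {
        // Stash the target so the render loop can read pixels straight into it.
        g_target.width  = width;
        g_target.height = height;
        g_target.pixels = rgssBitmapPixels;
        return 1;
    }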

In the dll, initially, we create an FBO (via extension, to allow support for pre-3.0). The FBO is sized to the target viewport dimensions. Rendering directly to the bitmap data causes OpenGL to bypass hardware and use a software rasterizer (a DirectDraw wrapper). Ewwwwww.
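
The FBO setup is roughly this shape, give or take (sketch only, assuming GLEW or a similar loader provides the EXT_framebuffer_object entry points):

    #include <GL/glew.h>

    static GLuint g_fbo = 0, g_colorTex = 0;

    void CreateOffscreenTarget(int width, int height)
    {
        // Colour texture the scene gets rendered into.
        glGenTextures(1, &g_colorTex);
        glBindTexture(GL_TEXTURE_2D, g_colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        // FBO sized to the target viewport, with the texture as its colour buffer.
        glGenFramebuffersEXT(1, &g_fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, g_colorTex, 0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }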

So instead we render to this offscreen FBO, gaining all the perks of hardware acceleration. All that's left is to get the pixels on the screen - so we take that bitmap pointer, and after figuring out the memory structure of an RM bitmap, we use glReadPixels to copy the FBO directly into the target bitmap, with no additional copies or conversions. This was the tricky part, and is mostly thanks to Glitchfinder for wrapping his head around the Ruby/RGSS bitmap data structure in RAM so that it could be written to directly from OpenGL.
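
And the per-frame readback boils down to roughly this (again just a sketch, using the illustrative names from above):

    void RenderFrame()
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_fbo);
        glViewport(0, 0, g_target.width, g_target.height);

        // ... draw the tilemap / sprites here ...

        // Exactly one readback per frame, straight into the RGSS bitmap.
        // GL_BGRA matches the 32-bit pixel layout, so no per-pixel conversion;
        // note glReadPixels returns rows bottom-up, so a flip is needed if the
        // bitmap stores them the other way around.
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, g_target.width, g_target.height,
                     GL_BGRA, GL_UNSIGNED_BYTE, g_target.pixels);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }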

So yes, it does technically pause the card and read pixels, but this only happens exactly once per frame, and is lightning fast compared to the software rendering (DirectDraw) of vanilla.

Hell, it runs quite playably on my laptop which can't even handle the vanilla tilemap, heh.

Edited for clarity.
 
That last bit is an understatement. Our in-progress, unoptimized code increased your FPS on that ancient, halfway-broken laptop by anywhere from 100% to 1,025%. Presumably, it will be better after optimizing the code.
 
avarisc":37z9qg9b said:
So instead we render to this offscreen FBO, gaining all the perks of hardware acceleration. All that's left is to get the pixels on the screen - so we take that bitmap pointer, and after figuring out the memory structure of an RM bitmap, we use glReadPixels to copy the FBO directly into the target bitmap, with no additional copies or conversions. This was the tricky part, and is mostly thanks to Glitchfinder for wrapping his head around the Ruby/RGSS bitmap data structure in RAM so that it could be written to directly from OpenGL.
So you're drawing to an FBO and then copying to an RM* bitmap?

That is exactly what I did. You'll find that glReadPixels is slow on some hardware compared to others, so you should test on many GPUs; I found it was painfully slow on Intel integrated GPUs and fastest on AMD cards (but then again, I was also reading from the bitmap to a GL texture up to 4 times per frame, which is also massively slow).


I'd like to be involved in this at some point. If you're drawing to an FBO and then copying the pixels, that means you can also plug in a post-processing step between those two phases.
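
Roughly what I mean, assuming you already have a second FBO and a compiled shader program lying around (names here are made up, not anyone's actual code):

    #include <GL/glew.h>

    // Post-processing pass wedged between the scene render and the readback.
    void PostProcessPass(GLuint sceneTex, GLuint postFbo, GLuint postProgram,
                         int width, int height)
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, postFbo);
        glViewport(0, 0, width, height);

        glUseProgram(postProgram);            // e.g. a blur or colour-grade shader
        glBindTexture(GL_TEXTURE_2D, sceneTex);

        // Fullscreen quad; identity matrices put these coords in clip space.
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();

        glUseProgram(0);
        // The glReadPixels step then reads from postFbo instead of the scene FBO.
    }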

You can also do hardware accelerated 3D rendering at any moment during this, so you could potentially make the fastest "mode 7" effect RM* has ever seen (Haha).

EDIT: To clarify, you're not writing directly to the bitmap, are you? You're writing to an FBO and then copying the pixel data into the RM* bitmap struct?


And one area that I've been researching before re-approaching this GL RM* business is pixel buffer objects:
http://www.songho.ca/opengl/gl_pbo.html
Scroll down to the "Asynchronous Read-back" section
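
The gist of that section, from memory: bind a pixel pack buffer so glReadPixels returns immediately, then map the *previous* frame's PBO and memcpy into the RM bitmap. Costs one frame of latency. (Sketch only; it assumes the two PBOs were created at init with glBufferData and GL_STREAM_READ.)

    #include <GL/glew.h>
    #include <cstring>

    static GLuint g_pbo[2] = { 0, 0 };
    static int    g_frame  = 0;

    void ReadbackWithPBO(int width, int height, unsigned char* dstPixels)
    {
        int cur  = g_frame % 2;
        int prev = (g_frame + 1) % 2;

        // Kick off this frame's transfer; the GPU copies into the PBO in the background.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo[cur]);
        glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

        // Collect last frame's pixels, which should have finished transferring by now.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo[prev]);
        void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (src) {
            std::memcpy(dstPixels, src, (size_t)width * height * 4);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }

        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        ++g_frame;
    }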

EDIT2: I'm also interested in what GL version you plan to use
 
The FBO and its storage reside on the video device. All rendering is done in one go - I can wait and use as many callbacks as I like, and when I'm sure I'm all done, I glReadPixels directly into the RM bitmap - the struct was only used to calculate the offset in RAM of the first pixel, and glReadPixels starts dumping BGRA there.

Here's another critical difference - I never, ever read bitmap data from RGSS into OpenGL. Doing that every frame would be an absolute FPS killer. I load the tilesets myself using SOIL directly in C++ (which doesn't currently support RTP magic paths, but I don't want to make end users install RTP anyway). The filename to load is passed in from RGSS, however. So all I do once per frame is dump those pixels to a single screenspace bitmap. And I've tested it on an Intel integrated GPU - definitely slower than decent hardware, but all in all, doing the tilemap math in C++ turned out to be a very significant gain.
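
The SOIL side really is about as small as it sounds, give or take (simplified sketch, not the exact code):

    #include <SOIL.h>

    // Returns a GL texture name, or 0 on failure; the filename is whatever
    // RGSS handed across.
    unsigned int LoadTileset(const char* filename)
    {
        return SOIL_load_OGL_texture(filename,
                                     SOIL_LOAD_RGBA,       // force 32-bit pixels
                                     SOIL_CREATE_NEW_ID,   // let SOIL create the texture
                                     0);                   // no extra flags
    }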

As for what version of GL, right now I'm trying to keep it to 2.0 (only requires 1.1 at the present moment). I *may* opt to change my mind on this, depending on later needs.

As for Mode7, heh. Doesn't compare to what I've got cooking. There's a reason we needed the extra horsepower, after all.

The tileset is presently rendered in accelerated 3D. I've used an orthographic projection to make it look 2D, but I can break away from that with no performance impact.
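
The orthographic setup is basically the stock fixed-function recipe, more or less (sketch only); swapping glOrtho for glFrustum/gluPerspective is what "breaking away" from flat 2D amounts to:

    #include <GL/glew.h>

    void SetupOrtho2D(int width, int height)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        // Top-left origin, one unit per pixel, so tiles land exactly on pixels.
        glOrtho(0.0, width, height, 0.0, -1.0, 1.0);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }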
 
