Envision, Create, Share

Welcome to HBGames, a leading amateur game development forum and Discord server. All are welcome, and amongst our ranks you will find experts in their field from all aspects of video game design and development.

What are you working on? (Game Making Thread)

In RMXP you could combine the shake-screen command with a screen transition and get a neat shockwave or distortion effect, depending on which transition pattern you used. But you can't move or have anything else going on while the transition is in progress.
 

My MXL990 arrived, so I'm looking forward to trying it out. I'll be able to practice hosting more once I start my new job in a couple of weeks. Any of you guys feel like being my guinea pig co-host over Skype? I want to set up a mix-minus loop to run the call through the mixer and directly into Audacity. That saves me the trouble of adding and syncing a separate recording in post.

Gonna also play around with sound effects, background music, and voice effects.
 
Made a frustum culling system for my next work project. Fully threaded too, which is nice; however, balancing the threads has been somewhat difficult.

I need to improve the scene graph crawl: right now it dumps the update of all of a node's children as one work batch, and since the per-node update work is surprisingly lightweight, that needs a total refactor.

On my 12-thread system (6 physical cores), I started off halving the hardware thread count to get the number of workers to spawn (so 12 threads = 6 workers). I changed that to a quarter and saw performance gains, so now it's at a third, which should be a good balance.
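
For anyone curious, the heuristic boils down to something like this minimal sketch (the divisor and the helper name are just illustrative, not the actual project code):

```cpp
#include <algorithm>
#include <thread>

// Spawn roughly a third of the reported hardware threads as workers,
// with a floor of one. The divisor is the tunable knob mentioned above
// (it went from 1/2 to 1/4 and settled at 1/3).
unsigned worker_count(unsigned divisor = 3)
{
    const unsigned hw = std::thread::hardware_concurrency(); // may report 0
    return std::max(1u, hw / divisor);
}
```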
 
I'm obviously still working on Sore Losers: Riot Grrrl. I've decided to revamp (or straight up replace) so many things that it's kinda ridiculous; but as with all the battle animation replacements I've made, it'll definitely be worth it!

The first thing I'm changing is the item placement around the levels. This is partly because I reorganised the database and so some old variables need pointing to new variables; partly because I didn't do it in a very structured or balanced way the first time around; partly because I have a lot more weapons now and so I'm replacing some of the item placements with weapons; and finally because I'm redesigning the "search minigame" and so doing the item placements at the same time makes a lot of sense.

The second thing I'm doing is the redesign I just mentioned and recently posted a screenshot to show off, so I'll go into more detail here. I've basically been cleaning up the "search minigame" so that there's no longer a worded tutorial (there's now an on-screen command prompt) and so that the rest of the screen fades to black whilst you're playing the minigame. The latter is actually way more effort than it sounds, because the minigame's "search bar" is a charset, which means that using the "tint screen" command would black out the "search bar" as well as the background graphics!

The idea behind these changes is to make the minigames flow better (the same logic is going to be applied to all the other minigames - and several of them are going to be changed to new minigames because reasons!) and to make it so that the important parts of the minigames don't clash with the background graphics. For some of the small "search bars" it looks pretty simplistic, but it makes a lot of the more complex "search bars" far easier to tackle!

Here it is in motion. For clarity, there are two search attempts shown here. The first one misses and the second one hits. The .gif then loops back to the first attempt... and before anyone asks, of course I missed the first attempt on purpose!

Searching_Sequence.gif
 
Been working a bit on my Genesis/Mega Drive game again, making some temporary-ish spritework for the player.
Spritedude.png
Spritedude_walk.gif


It will probably need a few little touch-ups before I chop it up into more efficiently used tiles. That, and I need to make some jumping, looking, shooting, etc. sprites too.
 
Made a threaded PNG loader that reports how much of the image has been loaded, so the GPU thread can load scan-lines as they are ready.

Part of the "remove all load-times" strategy. The implementation is very (like, VERY) crude right now, so I'm currently working out a design pattern on paper.

The plan is to make a collection of streamers for different file formats; the GPU thread will treat them all as generic texture loaders and stream in their progress regardless of what type of loader they are.
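
A rough sketch of what such a generic streamer interface could look like (the names and layout are made up for illustration, not the actual code):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical interface: each file-format streamer decodes on a worker
// thread and exposes its progress so the GPU thread can upload partial data.
class TextureStreamer
{
public:
    virtual ~TextureStreamer() = default;

    // Decode the next chunk (e.g. one scan-line or one mip level).
    // Returns false once the whole image has been decoded.
    virtual bool decode_step() = 0;

    // Fraction of the image decoded so far, in [0, 1].
    virtual float progress() const = 0;

    // Pixel data decoded so far; the GPU thread uploads only the new part.
    virtual const std::vector<std::uint8_t>& pixels() const = 0;
};
```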

As an example, a JPG loader would mipmap during load and stream in the mipmaps from low resolution up to high resolution, with the GPU thread upping the mip level each time one is complete, resulting in high-resolution JPG images fading into the scene as they load without any delay.

Right now the PNG test loader loads each scan-line. I think I'll end up keeping it like that, as libpng doesn't support decoding mips. My other plan is to accelerate it further with GPU buffers holding the texel data; if I do this, I'll probably make the PNG textures load in from both directions, with the GPU thread swapping between loading lower and upper blocks as it flips the buffer between read/write.
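
For context, the GPU-thread side of uploading scan-lines as they become ready could look roughly like this (a minimal sketch against plain OpenGL, assuming an already-allocated RGBA8 texture; it is not the real loader code):

```cpp
#include <cstddef>
#include <GL/gl.h> // glBindTexture/glTexSubImage2D are core GL 1.1

// Upload any newly decoded scan-lines of an RGBA8 image into an
// already-allocated texture, one glTexSubImage2D call per batch of new rows.
void upload_new_scanlines(GLuint texture, int width,
                          int rows_uploaded, int rows_ready,
                          const unsigned char* pixels)
{
    if (rows_ready <= rows_uploaded)
        return; // nothing new decoded yet

    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, rows_uploaded,                  // x/y offset into the texture
                    width, rows_ready - rows_uploaded, // size of the new block
                    GL_RGBA, GL_UNSIGNED_BYTE,
                    pixels + static_cast<std::size_t>(rows_uploaded) * width * 4);
}
```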
 
CMPaiOGWwAAsX3h.jpg


Drawing out additional sections for level three of SL: Riot Grrrl. Having gone through the level a few times whilst updating the search minigame and moving items around, I felt it was missing something when compared to the second level. The tray of papers/folders in the background is also full of game development notes...

...mostly anyway. There are also some Starcraft notes from when I played that!
 
Updated my new threaded PNG loader to include palette support. The system works out how many distinct values there are for each colour and from that determines the ideal bit depth; e.g. 13 distinct red values would mean 4 bits is enough to store that information.

Some colour quality is lost, but it's much nicer than having libpng expand everything to RGB space like the old loader did.

The only issue is that grey and grey+alpha PNG images end up encoded as GL_RED and GL_RG textures respectively. There isn't a nice way around this that doesn't involve expanding to RGB, so I'll just have to let it be a limitation to be solved in the shader.
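
The bit-depth choice itself is just a ceiling of log2 over the number of distinct values; a minimal sketch (the helper name is only for illustration):

```cpp
// Smallest number of bits able to index `count` distinct values,
// e.g. 13 distinct values -> 4 bits (2^4 = 16 >= 13).
unsigned bits_for_values(unsigned count)
{
    unsigned bits = 0;
    while (bits < 32 && (1u << bits) < count)
        ++bits;
    return bits;
}
```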
 
Xilef":3m6ucczh said:
Made a threaded PNG loader that reports how much of the image has been loaded, so the GPU thread can load scan-lines as they are ready.

Part of the "remove all load-times" strategy. The implementation is very (like, VERY) crude right now, so I'm currently working out a design pattern on paper.

The plan is to make a collection of streamers for different file formats; the GPU thread will treat them all as generic texture loaders and stream in their progress regardless of what type of loader they are.

As an example, a JPG loader would mipmap during load and stream in the mipmaps from low resolution up to high resolution, with the GPU thread upping the mip level each time one is complete, resulting in high-resolution JPG images fading into the scene as they load without any delay.

Right now the PNG test loader loads each scan-line. I think I'll end up keeping it like that, as libpng doesn't support decoding mips. My other plan is to accelerate it further with GPU buffers holding the texel data; if I do this, I'll probably make the PNG textures load in from both directions, with the GPU thread swapping between loading lower and upper blocks as it flips the buffer between read/write.

I want to apply something like this to voxel world data.
 
avarisc":2i2hkn0w said:
I want to apply something like this to voxel world data.

The problem to solve with that is having to jump around the file format to figure out where in world-space you need to load. I can imagine it being a pain to work out.

I've noticed that OpenGL storage formats do not always have an associated load format; GL_RGBA2, for example, cannot be loaded but may be stored.

I ended up having to change the palette loading system I wrote so it expands to the best-fitting storage format that can actually be loaded. That means some quality is gained back, but memory consumption during load is increased.
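
Just to illustrate the expansion idea (this is not how the actual loader is written): GL_RGBA2 exists as an internal format but has no matching client-side pixel type, so the nearest format that does have an upload path gets picked instead, e.g.:

```cpp
#include <glad/glad.h> // assuming a GL loader header; only standard enums are used

// Hypothetical helper: pick the smallest RGBA internal format that still has
// a direct client-side upload path, given the ideal bits per channel.
struct Storage
{
    GLenum internal_format; // what the GPU stores
    GLenum pixel_type;      // what the client can actually upload
};

Storage best_fitting_rgba(unsigned bits_per_channel)
{
    if (bits_per_channel <= 4)
        // GL_RGBA2 can be stored but not uploaded directly, so expand to RGBA4.
        return { GL_RGBA4, GL_UNSIGNED_SHORT_4_4_4_4 };
    return { GL_RGBA8, GL_UNSIGNED_BYTE };
}
```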

Also implemented texture matrices as a default shader uniform:
262xpio.jpg


The motivation behind that was the fact that in previous projects a lot of texture mapping manipulation was done (shearing and scaling). In the past I didn't implement texture matrices, even when I probably should have used them.

One case in mind is the UDMF map format, where the walls can have tiling textures. I used to do this via UV co-ords, which ended up with lots of geometry being generated for one map. Now with texture matrices, the walls can all use the same generic quad-plane and the scaling/shearing is done as a uniform matrix (which is uploaded before any drawing even begins; I'm using near-zero driver overhead methods).

You can imagine a map with 200 walls: if all the walls were mapped and sized differently, my previous renderers would generate 200 different geometries on the GPU. Now only one geometry is stored and bound only once, and every single wall can be drawn with a single draw call of that shared quad.
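
Roughly, a per-wall draw then becomes just a matrix upload plus a draw of the shared quad. A minimal sketch, assuming a plain glUniform path rather than whatever upload method the renderer actually uses:

```cpp
#include <glad/glad.h> // assuming a GL loader (glad, GLEW, ...) provides GL 2.0+

// Hypothetical per-wall data: texture scale/shear packed into a 4x4 matrix
// (column-major, as OpenGL expects).
struct Wall
{
    float tex_matrix[16];
};

// The shared quad's VAO is bound once outside this function;
// each wall only changes the texture-matrix uniform before drawing.
void draw_walls(GLint tex_matrix_location, const Wall* walls, int count)
{
    for (int i = 0; i < count; ++i)
    {
        glUniformMatrix4fv(tex_matrix_location, 1, GL_FALSE, walls[i].tex_matrix);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // the generic quad-plane
    }
}
```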

Another thing I've been working on is bindless textures. Switching which texture you are using is expensive in OpenGL; texture units exist mostly so you can reduce that switching (up to 32 different textures can each occupy a unit, and each draw selects the unit where its texture lives).

Now what happens is the driver reports a uint64 handle (which is usually a pointer into GPU memory) and that is used directly; no texture units or texture binding are involved at all. A very big optimisation for the devices that need it.
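
On the API side this is the ARB_bindless_texture extension; the basic flow looks roughly like this (a sketch, assuming the extension is supported and exposed by your loader):

```cpp
#include <glad/glad.h> // assuming a loader that exposes ARB_bindless_texture

// Get a bindless handle for an existing texture, make it resident, and hand
// it to the shader as a 64-bit uniform instead of binding a texture unit.
GLuint64 make_bindless(GLuint texture, GLint sampler_uniform_location)
{
    GLuint64 handle = glGetTextureHandleARB(texture);
    glMakeTextureHandleResidentARB(handle); // must be resident before sampling
    glUniformHandleui64ARB(sampler_uniform_location, handle);
    return handle;
}
```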
 
I woke up yesterday with a fully formed story in my head. Jotted it down in like 20 minutes. When I read over it last night I realized the characters had no motivations. They had personalities. They were active. But if you stopped and asked why they did anything I wouldn't be able to answer it. But I suppose a lot of stories are like that.
 
coyotecraft":2fn4c7p3 said:
I woke up yesterday with a fully formed story in my head. Jotted it down in like 20 minutes. When I read over it last night I realized the characters had no motivations. They had personalities. They were active. But if you stopped and asked why they did anything I wouldn't be able to answer it. But I suppose a lot of stories are like that.

Totally agree with you here. You're right, most stories are like that.
 
Actually, I think the characters were just missing an emotional state - I was thinking the only way to color that in was with motivations.
But I learned an important lesson on differentiating dialogue, and probably why I've never been good at writing dialogue. You've probably heard how it's important for characters to be distinguished from each other. I always thought of this as a matter of verbosity - different word choices, dialects, using colloquialisms, etc...
But revisiting some writing blogs, they pointed out how an apparent difference in age between characters was enough. Or a master-servant relationship, for example. Of course that didn't help me at first, because usually my characters are friends of about the same age, same class, etc... One blog went on to say that in a situation where two characters are pretty similar - maybe they're clones - they can still be distinguished simply by having one character lead the conversation. As long as that character stays dominant and it doesn't change, you'll be able to tell the two apart.
A light came on in my head. Of course, emotional states are also a way to differentiate character dialogue. I never really think about what a character's emotional state should be when writing a scene cold.
 
Working through creative blocks.
Currently focusing on game writing and story beats. I know what needs to be established; it's just a matter of breaking it up for player consumption. For immersion's sake, it can't all unfold in one cutscene. But at the same time, I have to make sure the player doesn't miss critical information.
Dossiers work for survival horror games because they act as clues. They can work in RPGs as flavor text, but not as a glossary of need-to-know terms; although it's a clever way to avoid characters talking about and explaining things they should already know.
 
And after watching some sweet battle systems on YouTube, I'm re-motivated to try tackling another creative block again, which is animated battlers. Specifically, attack and magic animations. I can come up with any range of motions, but I haven't found the confidence to go with any one idea. I need some kind of design theory that'll assure me which choice is right, and I don't know where to look for a source of wisdom.

Like, Fire Emblem's animations are snappy. But I'll also be extending the number of attacks as weapons level up. I want it to have more finesse than just repeating the same action for every hit - and that's what most games do. 3D games have the advantage of changing camera angles.

I generally have the idea that an upgraded attack could have a strikingly different pattern to it, e.g. a 3-hit "Z" slash upgraded to a 5-hit slash traced as a pentagram, instead of a "Z" followed by a double slash "=".
And since it's in 2D, I should make use of effects like strobing after-images, fading in and out, and doing something impossible like being in two places at once to perform an "X" attack.
I'm taking as many notes as I can while watching attack exhibitions on YouTube.
 
coyotecraft":1k1lsn61 said:
Like, Fire Emblem's animations are snappy. But I'll also be extending the number of attacks as weapons level up. I want it to have more finesse than just repeating the same action for every hit - and that's what most games do. 3D games have the advantage of changing camera angles.

When I was doing the OpenGL RPG Maker API, one of the things I wanted to implement more than anything else was a simple battle system with a 3D camera. Maybe something like a 3D version of Golden Sun's battle camera.

The added motion gives so much more emphasis to the action.
 
