I'm writing an A* algorithm for my game. The problem is that the algorithm's loop either lags the game or is far too slow when run in a coroutine: it takes about 0.3-0.5 seconds without any yields, but 15-20 seconds with 2 yields. Finding a path takes about 300*8 to 500*8 iterations (*8 because I check 8 neighbors for each node in a nested loop).
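Simplified, the coroutine has roughly this shape (the names and the neighbor check are placeholders, not my actual code; in Unity the yield would be `yield return null`, i.e. wait one frame):

```csharp
using System.Collections;

public class PathfinderCoroutine
{
    // Rough shape of the A* coroutine: one outer iteration per node
    // expansion, 8 neighbor checks inside, and a yield per outer
    // iteration. With ~300-500 expansions, yielding once per frame at
    // 60 FPS already accounts for several seconds of wall time.
    public static IEnumerator FindPathSliced(int outerIterations)
    {
        for (int i = 0; i < outerIterations; i++)
        {
            for (int n = 0; n < 8; n++)
            {
                // examine neighbor n of the current node (stub)
            }
            yield return null; // in Unity: resume next frame
        }
    }
}
```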
So I decided to put it on a separate thread so I could drop the wait/sleep/yield statements and run it at full speed. I'm using the C# ThreadPool and queue every pathfinding request on it (every time I want to find a path I queue a work item; currently I queue 20 requests at the start of the game).
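The queueing itself is just one `ThreadPool.QueueUserWorkItem` call per request, roughly like this (the class name and the dummy loop standing in for the real A* are placeholders):

```csharp
using System.Threading;

public class PathfindingScheduler
{
    const int TotalRequests = 20;
    static int finished = 0;
    public static readonly ManualResetEvent AllDone = new ManualResetEvent(false);

    // Queue 20 pathfinding jobs at game start, as described above.
    public static void QueueAll()
    {
        for (int i = 0; i < TotalRequests; i++)
            ThreadPool.QueueUserWorkItem(FindPath, i);
    }

    static void FindPath(object state)
    {
        // Stand-in for the real A* loop: ~300-500 outer iterations,
        // 8 neighbor checks each, no sleeps or yields.
        int expanded = 0;
        for (int i = 0; i < 400; i++)
            for (int n = 0; n < 8; n++)
                expanded++;

        // Signal once every queued job has completed.
        if (Interlocked.Increment(ref finished) == TotalRequests)
            AllDone.Set();
    }
}
```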
The problem is: it still lags (FPS drops by 20-30).
In the profiler the lag spikes show up under "Camera.Render", but as soon as all pathfinding is done (20 paths) the spikes stop (in reality the lag decreases bit by bit as each thread finishes).
Could it be that the ThreadPool's threads run ahead of the camera, rendering, and other UI work, forcing parts of the game to wait? How can I avoid this while keeping the speed?
I tried skipping the ThreadPool and creating new threads myself with their priority set to BelowNormal, but the lag was unchanged. I also tried setting the affinity to a specific core using a custom class I found, but it didn't seem to work, and I don't really want 100+ pathfinding jobs pinned to a single core anyway.
I'm using Unity 5.2.3f on a 6-core CPU. I've read that Unity 5 makes heavier use of multithreading internally; is it possible that my worker threads are now "blocking" some of Unity's own threads from running?
I've noticed that the lag decreases a bit when I use Thread.Sleep(1) in the loop (presumably because it gives time back to the main thread?), but I still get the 20 FPS drop. Thread.Sleep(10) removes the lag entirely, but that defeats the purpose of using threads (I wanted more speed, not less).
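To be concrete, I mean inserting the sleep into the worker's outer loop, like this (simplified, with a stub in place of the real neighbor expansion):

```csharp
using System.Threading;

public class SleepingWorker
{
    // Each outer A* iteration gives up the CPU for sleepMs milliseconds.
    // Sleep(1) still lags for me; Sleep(10) removes the lag but makes
    // a ~400-iteration search take 4+ seconds on its own.
    public static int Run(int outerIterations, int sleepMs)
    {
        int expanded = 0;
        for (int i = 0; i < outerIterations; i++)
        {
            for (int n = 0; n < 8; n++)
                expanded++; // neighbor checks (stub)
            Thread.Sleep(sleepMs);
        }
        return expanded;
    }
}
```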
Last questions: does an "infinite"/long while-loop without a wait/sleep freeze the application no matter which thread or core it runs on? Is a "full-speed" A* impossible without precomputation?