Common shared WorkerPool for both Potree1 and Potree2 pointclouds #182

Jordan-Lane wants to merge 1 commit into pnext:master from
Conversation
```ts
	}
}

export class WorkerQueue {
```
Renamed and moved the previous Potree1 WorkerPool to WorkerQueue. A WorkerQueue is responsible for returning workers of a single type.

The AsyncBlockingQueue with AutoTerminatingWorkers is awesome. By reusing this functionality as a queue, we know that workers will be cleaned up when they are not being used.
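For context, the core idea behind an async blocking queue is that `dequeue()` returns a promise that resolves as soon as an item becomes available, so consumers "block" without busy-waiting. This is a minimal sketch of that pattern, not the actual Potree implementation; the class and method names mirror the PR but the body here is illustrative.

```typescript
// Hypothetical sketch of an AsyncBlockingQueue. If a consumer calls
// dequeue() while the queue is empty, it receives a pending promise
// that is resolved by the next enqueue() call.
class AsyncBlockingQueue<T> {
  private items: T[] = [];
  private waiters: Array<(item: T) => void> = [];

  enqueue(item: T): void {
    const waiter = this.waiters.shift();
    if (waiter) {
      waiter(item); // hand the item straight to a pending dequeue()
    } else {
      this.items.push(item);
    }
  }

  dequeue(): Promise<T> {
    const item = this.items.shift();
    if (item !== undefined) {
      return Promise.resolve(item);
    }
    return new Promise<T>((resolve) => this.waiters.push(resolve));
  }

  get size(): number {
    return this.items.length;
  }
}
```

A worker queue built on top of this can lazily spawn a worker when `dequeue()` would otherwise stall, and idle workers handed back via `enqueue()` can self-terminate after a timeout, which is how unused workers get cleaned up.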
```ts
	get isTerminated(): boolean {
		return this.terminated;
	}

export class WorkerPool {
```
The new WorkerPool is a singleton whose pool consists of one WorkerQueue per worker type. We set a maximum number of workers per queue, which allows us to manage all the different queues in a single place.
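The shape described above can be sketched as follows. This is an illustrative reduction, assuming the `getInstance`/`maxWorkersPerPool` names from the PR; the per-type bookkeeping is simplified to a counter rather than a real queue of workers.

```typescript
// Hypothetical sketch of a singleton WorkerPool keyed by worker type.
// The real pool would hold a WorkerQueue per type; here we only track
// how many workers each type has spawned against the shared cap.
type WorkerType = "BINARY_DECODER" | "LAZ_DECODER";

class WorkerPool {
  private static instance: WorkerPool | undefined;
  private spawned = new Map<WorkerType, number>();
  maxWorkersPerPool = 32; // shared cap applied to every per-type queue

  private constructor() {}

  static getInstance(): WorkerPool {
    if (!WorkerPool.instance) {
      WorkerPool.instance = new WorkerPool();
    }
    return WorkerPool.instance;
  }

  // Returns true if a new worker of this type may be spawned.
  tryAcquire(type: WorkerType): boolean {
    const count = this.spawned.get(type) ?? 0;
    if (count >= this.maxWorkersPerPool) {
      return false;
    }
    this.spawned.set(type, count + 1);
    return true;
  }

  release(type: WorkerType): void {
    const count = this.spawned.get(type) ?? 0;
    this.spawned.set(type, Math.max(0, count - 1));
  }
}
```

The singleton is what makes the cap global: Potree1 and Potree2 loaders both reach the same instance, so one setting bounds every worker type.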
```diff
 	static get maxLoaderWorkers(): number {
-		return BinaryLoader.WORKER_POOL.maxWorkers;
+		return WorkerPool.getInstance().maxWorkersPerPool;
```
Previously, setting maxLoaderWorkers had no effect on Potree2 pointclouds, which could spawn an unlimited number of workers. Now we can set the maximum number of workers that each pool is allowed to spawn.
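The diff above shows the getter delegating to the shared pool; presumably the setter does the same, so a single assignment caps Potree1 and Potree2 workers alike. A minimal sketch of that delegation, with hypothetical stand-in names (`SharedPool`, `Potree`) rather than the actual Potree classes:

```typescript
// Hypothetical shared pool: one instance, one cap.
class SharedPool {
  private static instance = new SharedPool();
  maxWorkersPerPool = 32;

  static getInstance(): SharedPool {
    return SharedPool.instance;
  }
}

// Static accessors forward to the singleton, so the old public API
// (Potree.maxLoaderWorkers) keeps working but now affects every queue.
class Potree {
  static get maxLoaderWorkers(): number {
    return SharedPool.getInstance().maxWorkersPerPool;
  }

  static set maxLoaderWorkers(value: number) {
    SharedPool.getInstance().maxWorkersPerPool = value;
  }
}
```

Keeping the accessor on `Potree` while moving the state into the singleton means existing callers need no changes to benefit from the unified cap.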
|
Hello Jordan, I'm taking care of checking the PRs, so I'll be testing this feature and will get back to you this week. That being said, the worker pool is being updated in the new Gaussian splats PR; we will discuss this feature after the splats are implemented into the library.
|
We've run Prettier recently. It's highly recommended to first run `npm run format:write`, then commit and rebase.
This is the more complex, but better, solution to issue #180.

A common workflow we have encountered is loading both Potree1 and Potree2 pointclouds within the same scene. Currently this makes worker management challenging, as Potree1 and Potree2 each have their own WorkerPool with a different implementation.

This PR consolidates the two into a single WorkerPool. This allows us to properly set maxLoaderWorkers and manage ALL the workers in a single place. This is a slightly larger PR, so I've outlined the main changes below. Please let me know if you have any questions or concerns; I am very open to making changes so we can get this merged in!