
Convert more functions to async/await #91


Draft: Borewit wants to merge 1 commit into dev from more-async-await

Conversation


@Borewit Borewit commented Jul 16, 2025

No description provided.

@hvianna (Owner) left a comment:

I gave it a quick look and left a couple of comments in places where I think there could be issues. Thanks again for helping me improve my messy code! 😅

if ( FILE_EXT_AUDIO.includes( extension ) || ! extension ) {
	// disable retrieving metadata of video files for now - https://github.com/Borewit/music-metadata-browser/issues/950
	trackData.retrieve = 1; // flag this item as needing metadata
	await retrieveMetadata();
@hvianna (Owner) commented:

We don't really need to wait for retrieveMetadata(), as it doesn't return any value and can run in parallel. I could be mistaken, but I think this would slow down the process of adding multiple files to the play queue.
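
For context, the two call styles being compared look roughly like this (a simplified sketch based on the snippet above, not a suggested change):

// awaited: adding a file to the queue waits for its metadata request to finish
trackData.retrieve = 1;
await retrieveMetadata();

// not awaited: the request runs in the background and queueing continues immediately,
// but the returned promise should still get a .catch() somewhere
trackData.retrieve = 1;
retrieveMetadata();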

@Borewit (Author) replied Jul 16, 2025:

I see no limit on this recursion, so it could potentially spin up a large number of tasks, equal to the total number of queued items. This can easily drain memory and become a bottleneck rather than a speed boost. I strongly advise against that. If you do it anyway:

  1. Limit the number of parallel tasks to max 4
  2. Handle the promise in the recursion

This has proven to be a real issue before, and I have therefore even documented strategies against it: https://github.com/Borewit/music-metadata?tab=readme-ov-file#how-can-i-traverse-a-long-list-of-files
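
A minimal sketch of that kind of limit (a hypothetical processQueue() helper, not this project's code):

async function processQueue( items, worker ) {
	const pending = new Set();

	for ( const item of items ) {
		const task = worker( item )
			.catch( err => console.error( 'metadata task failed', err ) ) // never leave rejections unhandled
			.finally( () => pending.delete( task ) );

		pending.add( task );

		if ( pending.size >= 4 )              // limit the number of parallel tasks to max 4
			await Promise.race( pending );    // wait until a slot frees up
	}

	await Promise.allSettled( pending );      // drain whatever is still in flight
}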

@hvianna (Owner) replied:

The limit control is done in retrieveMetadata() itself. A recursive call is made at the end, after a request resolves.

async function retrieveMetadata() {
	// leave when we already have enough concurrent requests pending
	if ( waitingMetadata >= MAX_METADATA_REQUESTS )
		return;

https://github.com/hvianna/audioMotion.js/blob/a6b6e3e45fd8f0dfa69a485309e53d9969757eb0/src/index.js#L3299C1-L3302C10
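
The pattern being described is roughly the following (a simplified sketch of this comment, with hypothetical nextItemNeedingMetadata() and parseQueueItem() helpers; not the exact code in dev):

async function retrieveMetadata() {
	// leave when we already have enough concurrent requests pending
	if ( waitingMetadata >= MAX_METADATA_REQUESTS )
		return;

	const queueItem = nextItemNeedingMetadata(); // hypothetical: next track flagged as needing metadata
	if ( ! queueItem )
		return;

	waitingMetadata++;
	try {
		await parseQueueItem( queueItem ); // hypothetical: fetch and assign the metadata
	}
	catch( e ) {
		consoleLog( 'metadata request failed', e );
	}
	waitingMetadata--;
	retrieveMetadata(); // recursive call after the request settles, to pick up the next pending item
}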

@Borewit (Author) replied:

I see.
In that case it's better to take Math.min(MAX_METADATA_REQUESTS, queue.length) items from the queue and run those. No need to introduce an arbitrary counter for that.

Sorry, I have not analyzed what you do with the remainder of the queue, and how you trigger new queue processing requests.

If the goal is to process the entire queue while respecting MAX_METADATA_REQUESTS parallel tasks, I suggest using something like p-limit to easily control the number of parallel async tasks.
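
For illustration, a minimal p-limit sketch (the retrieveMetadataFor() helper and the retrieve flag filter are assumptions, not this project's code):

import pLimit from 'p-limit';

const limit = pLimit( MAX_METADATA_REQUESTS ); // e.g. 4 concurrent metadata requests

// parse every queued item still flagged for metadata, never running
// more than MAX_METADATA_REQUESTS tasks at the same time
const tasks = queue
	.filter( item => item.retrieve )
	.map( item => limit( () => retrieveMetadataFor( item ) ) );

await Promise.allSettled( tasks ); // rejections are collected, not lost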

@hvianna (Owner) replied:

> I see. In that case it's better to take Math.min(MAX_METADATA_REQUESTS, queue.length) items from the queue and run those. No need to introduce an arbitrary counter for that.

I don't think it's so simple, as queue here is the entire play queue, not just the files waiting to be parsed for metadata.

@Borewit (Author) replied Jul 18, 2025:

MAX_METADATA_REQUESTS seems to serve a dual role: it limits not only the number of parallel metadata requests, but also the number of tracks in the queue for which metadata will be retrieved. This means only a few tracks will ever get metadata assigned, while the rest remain untouched. Is it really beneficial to prioritize just the first few, while potentially ignoring the rest?

It’s true that metadata retrieval is an expensive operation. The cost breaks down into:

  1. I/O time: reading the file (can benefit from parallelization, especially for remote files)
  2. Processing time: extracting metadata (runs on the main thread, inherently single-threaded in JavaScript)

But more important than parallelism or CPU-bound limits is something that hasn't been discussed yet:

Metadata extraction, even when well-structured, runs on the main thread. If we attempt to process the entire queue at once (even asynchronously), we risk blocking the UI and degrading application responsiveness. The most important goal, in my opinion, is not to get the metadata fast; it's to do so without compromising user experience.

Correct me if I am wrong, but the rationale behind MAX_METADATA_REQUESTS is to balance responsiveness with progressive enhancement. Currently, however, it limits the total number of tracks processed, not just the concurrency, which results in metadata being retrieved only for a small portion of the playlist.

Taking a step back, and doing one step at a time:

I’ve updated the code to restore controlled parallel processing, ensuring that we still respect performance limits while improving structure through async/await. I also removed waitingMetadata, and all promises are now properly tracked.

The main goal of this PR remains improving maintainability and clarity, without sacrificing UX.

Maybe introduce a better way of processing the metadata in a different PR. Maybe run the metadata extraction in a worker thread?
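
As a rough, untested sketch of what that could look like (hypothetical file name and message shape, and assuming music-metadata-browser's parseBlob() can run inside a module worker):

// metadata-worker.js (hypothetical)
import { parseBlob } from 'music-metadata-browser';

self.onmessage = async ( { data: { id, file } } ) => {
	try {
		const metadata = await parseBlob( file, { skipPostHeaders: true } );
		self.postMessage( { id, metadata } ); // assumes the metadata object is structured-cloneable
	}
	catch( e ) {
		self.postMessage( { id, error: String( e ) } );
	}
};

// main thread (hypothetical usage)
const metadataWorker = new Worker( new URL( './metadata-worker.js', import.meta.url ), { type: 'module' } );
metadataWorker.onmessage = ( { data } ) => { /* match data.id to a queue item and call addMetadata() */ };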

Looking forward to your thoughts, @hvianna.

@hvianna (Owner) replied:

> MAX_METADATA_REQUESTS seems to serve a dual role: it limits not only the number of parallel metadata requests, but also the number of tracks in the queue for which metadata will be retrieved. This means only a few tracks will ever get metadata assigned, while the rest remain untouched.

😮 If it's only retrieving the metadata for the first four entries in the queue, something is off. Its intended purpose is solely to limit the number of concurrent requests, and it should eventually parse all tracks in the queue. You can check how it's working in the dev branch.

> Maybe introduce a better way of processing the metadata in a different PR. Maybe run the metadata extraction in a worker thread?

Would appreciate your help on that, once we have this current version released!

@Borewit (Author) replied Jul 18, 2025:

Well, on the dev branch that is what retrieveMetadata does.

But retrieveMetadata is called by addSongToPlayQueue, and addSongToPlayQueue is itself spun up in parallel.

So on the dev branch the number of parallel tasks spun up equals the number of tracks you added, multiplied by the total number of tracks remaining without metadata in the queue. Then a lot of things are trying to do the same thing, and somewhere in that mess waitingMetadata limits the number of parallel tasks.

Here Knuth's optimization principle comes into play:

> Premature optimization is the root of all evil. (ref)

My advice:

  1. Use async/await where possible. It's easier to read and less likely to lead to mistakes.
  2. Handle all Promises; rejections (errors) should never be completely ignored. If you see an unhandled promise rejection in your browser console, fix it (a small sketch follows below).
  3. Only perform parallel execution when everything else is under control. Good structure will give you more of a performance advantage, I promise.
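
A small sketch of what I mean by point 2 (the tasks array is hypothetical; not this project's code):

// fire-and-forget, but never silently: attach a catch so rejections are reported
retrieveMetadata().catch( e => consoleLog( 'metadata retrieval failed', e ) );

// when several tasks run in parallel, collect every outcome instead of dropping them
const results = await Promise.allSettled( tasks );
for ( const result of results ) {
	if ( result.status === 'rejected' )
		consoleLog( 'task failed', result.reason );
}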

src/index.js Outdated
Comment on lines 1243 to 1244
if ( playlistPos > queueLength() - 3 )
	await loadNextSong();
@hvianna (Owner) commented:

I don't think this can be nested, as it will change the original logic. Also, in the dev branch loadNextSong() no longer exists - we should use loadSong( NEXT_TRACK ) instead.

@hvianna (Owner) added:

I mean nested in the outer if... I should have included lines 1241 and 1242 as well.

@Borewit (Author) replied:

> we should use loadSong( NEXT_TRACK )

Ah yes, I inherited that from my first changes, written on the default branch.

I updated the logic, with the difference that I use:

await loadSong( NEXT_TRACK );

rather than

loadSong( NEXT_TRACK );

Cutting promises loose is usually not a great idea.

Please double check; I don't know exactly what it does. These few lines were the trickiest part I changed.
Especially this was nasty: loadSong(0).then( () => resolve(1) ); as it kind of schedules resolving the promise, but execution continues in parallel.
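
To illustrate the difference (a minimal sketch of the two patterns; the surrounding function shape is hypothetical):

// old pattern: resolution is scheduled via .then(), but the executor
// (and whatever follows it) keeps running in parallel with loadSong()
function playNextThen() {
	return new Promise( resolve => {
		loadSong( 0 ).then( () => resolve( 1 ) );
		// code here runs before loadSong( 0 ) has settled
	} );
}

// async/await equivalent: execution suspends until loadSong() settles, so the ordering is explicit
async function playNextAwait() {
	await loadSong( 0 );
	return 1;
}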

@Borewit Borewit force-pushed the more-async-await branch 3 times, most recently from d24843a to c5bf88d on July 18, 2025 08:39
@Borewit Borewit force-pushed the more-async-await branch from c5bf88d to f4fc3dc on July 18, 2025 09:28
@Borewit (Author) commented Jul 18, 2025:

Setting this one to draft, as the discussion regarding the queue loop overlaps with #89. Hence I suggest completing #89 first.

@Borewit Borewit marked this pull request as draft July 18, 2025 09:48

Comment on lines +3323 to +3338
if ( queueItem.handle ) {
	try {
		if ( await queueItem.handle.requestPermission() !== 'granted' )
			return;

		uri = URL.createObjectURL( await queueItem.handle.getFile() );
		revoke = true;
	}
	catch( e ) {
		consoleLog(`Error converting queued file="${queueItem.handle.file}" to URI`, e);
		return;
	}
}

try {
	const metadata = await mm.fetchFromUrl( uri, { skipPostHeaders: true } );
	if ( metadata ) {
		addMetadata( metadata, queueItem ); // add metadata to play queue item
@hvianna (Owner) commented Jul 18, 2025:

Edit: just saw your previous comment on this...
Did you inadvertently revert this? It's still using fetchFromUrl(). I think this is now dependent on the changes in #89.
