• barosl 19 hours ago

I tested the demo at https://moq.dev/publish/ and it's buttery as hell. Very impressive. Thanks for the great technology!

Watching the Big Buck Bunny demo at https://moq.dev/watch/?name=bbb on my mobile phone leaves a lot of horizontal black lines. (Strangely, it is OK on my PC despite using the same Wi-Fi network.) Is it due to buffer size? Can I increase it client-side, or should it be done server-side?

Also, thanks for not missing South Korea in your "global" CDN map!

• Twirrim an hour ago

Chrome on my OnePlus 10: I get flickering black lines routinely. The fact that they're going from somewhere along the top down towards the right makes me wonder if it's maybe a refresh artifact? It's sort of like the rolling shutter effect.

• solardev 14 hours ago

Is it Chrome only? On Android Firefox it just says no browser support :(

• seany 12 hours ago

Same here

• jamiek88 an hour ago

Same on safari

• kixelated 19 hours ago

Horizontal black lines? Dunno what that could be about, we render to a <canvas> element which is resized to match the source video and then resized again to match the window with CSS.

• chrismorgan 13 hours ago

What’s that like for performance and power usage? I understand normal videos can generally be entirely hardware-accelerated so that the video doesn’t even touch the CPU, and are passed straight through to the compositor. I’m guessing with this you’re stuck with only accelerating individual frames, and there’ll be more back and forth so that resource usage will probably be a fair bit higher?

An interesting and unpleasant side-effect of rendering to canvas: it bypasses video autoplay blocking.

• kixelated 12 hours ago

It's all hardware accelerated, assuming the VideoDecoder has hardware support for the codec. VideoFrame is available in WebGL and WebGPU as a texture or gpu-buffer. We're only rendering after a `requestAnimationFrame` callback so decoded frames may get automatically skipped based on the display frame rate.

I don't think the performance would be any worse than the <video> tag. The only exception would be browser bugs. It definitely sounds like the black bars are a browser rendering bug given it's fine when recorded.
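
For reference, a minimal sketch of what that decode-and-render loop can look like with WebCodecs; the codec string and the network plumbing that feeds chunks to the decoder are illustrative, not the project's actual player code:

```ts
// Sketch only: render the newest decoded VideoFrame to a <canvas> on each
// requestAnimationFrame tick, skipping frames the display can't keep up with.
const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

let latest: VideoFrame | null = null;

const decoder = new VideoDecoder({
  output: (frame) => {
    latest?.close(); // drop an undisplayed older frame
    latest = frame;
  },
  error: (e) => console.error(e),
});
decoder.configure({ codec: "avc1.64001f" }); // illustrative codec string

// Elsewhere: decoder.decode(new EncodedVideoChunk({ ... })) as data arrives.

function render() {
  if (latest) {
    canvas.width = latest.displayWidth;
    canvas.height = latest.displayHeight;
    ctx.drawImage(latest, 0, 0);
    latest.close();
    latest = null;
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```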

• DaleCurtis 10 hours ago

Unfortunately canvas (rgb'ish) can't overlay as efficiently as <video> (yuv'ish), so there is some power cost relative to the lowest power video overlays.

It really only matters in long form content where nothing else on the page is changing though.

• kixelated 12 hours ago

Oh and the autoplay restrictions for <video> don't apply when muted.

• chrismorgan 12 hours ago

Depends on your configuration. Firefox has a “block audio and video” option. Which this bypasses.

• Numerlor 16 hours ago

Doesn't show up on screen capture, but there are random, quickly flickering rolling lines on my phone, kind of like analog distortion on old TVs.

• chronicler 18 hours ago

I have the same issue with the black lines.

• nine_k 10 hours ago

The page mentions a lot of Rust code and WASM. Maybe your phone's CPU cannot run WASM fast enough?

My Samsung S20 shows no black lines.

• TheMrZZ 6 hours ago

My Samsung S24 Ultra shows black lines too, on Chrome and Samsung Internet.

• bb88 13 hours ago

On a MacBook Air M4 with a 600 Mbps connection, it's instantaneous and amazing.

• tonyhart7 10 hours ago

With this PC spec and internet speed, I expect it's "normal".

• ofrzeta 9 hours ago

I have the same experience on a Macbook Air M1 (I don't think that matters at all) and 100 MBit/s DSL.

• stronglikedan 19 hours ago

I don't get the black lines on Android/Chrome but it doesn't respect my aspect ratio when I go full screen. Instead of adding black bars to the sides, it excludes the top and bottom of the video completely.

• kixelated 19 hours ago

I am bad at CSS.

• Waterluvian 15 hours ago

Managing aspect ratios in conjunction with managing a responsive page layout is one of the darker parts of CSS in my experience. You’re not alone.

• nine_k 10 hours ago
• Waterluvian 5 hours ago

I wish that were true, but there are so many cases where aspect-ratio still doesn't work. It's a pretty weak property, so things like flexbox parents will quickly cause it to be ignored.

• cchance 15 hours ago

Holy shit that starts streaming fast! like WTF

• otterley 19 hours ago

There's a whole "why should I care?" section in this breathless post that doesn't explain how Media over QUIC benefits either media publishers or end users - the two most important (perhaps the only important) parties involved in this exchange. So, why should I care?

• kixelated 19 hours ago

My fault, I was trying too hard to avoid rehashing previous blog posts: https://moq.dev/blog/replacing-webrtc/

And you're right that MoQ primarily benefits developers, not end users. It makes it a lot easier to scale and implement features; indirect benefits.

• perbu 9 hours ago

End-to-end (glass-to-glass) latency is substantially better. Mostly because the protocol isn't request/response any more.

• englishm 21 hours ago

Hi! Cloudflare MoQ dev here, happy to answer questions!

Thanks for the award, kixelated. xD

• VoidWhisperer 20 hours ago

> No head-of-line blocking: Unlike TCP where one lost packet blocks everything behind it, QUIC streams are independent. A lost packet on one stream (e.g., an audio track) doesn't block another (e.g., the main video track). This alone eliminates the stuttering that plagued RTMP.

It is likely that I am missing this due to not being super familiar with these technologies, but how does this prevent desync between audio and video if there are lost packets on, for example, the audio track, but the video track isn't blocked and keeps on playing?

• englishm 20 hours ago

Synchronized playback is usually primarily a player responsibility, not something you should (solely) rely on your transport to provide. We have had some talk about extensions to allow for synchronizing multiple tracks by group boundaries at each hop through a relay system, but it's not clear if that's really needed yet.

Essentially though, there are typically some small jitter buffers at the receiver, and the player knows how to draw from those buffers, syncing audio and video. Someone who works more on the player side could probably go into a lot more interesting detail about approaches to doing that, especially at low latencies. I know it can also get complicated with hardware details of how long it takes an audio sample vs. a video frame to actually be reproduced once the application sinks it into the playback queue.
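
To make that concrete, here's a toy illustration (not any particular player's code) of the usual approach: buffer decoded video frames and, on each render tick, present the frame whose timestamp best matches the audio clock, dropping frames that are already late:

```ts
// Assumes frame timestamps and the audio clock share the same timebase (µs).
interface TimedFrame {
  timestampUs: number;
  frame: VideoFrame;
}

// `buffer` is the jitter buffer, sorted by timestamp ascending.
function pickFrame(buffer: TimedFrame[], audioClockUs: number): TimedFrame | undefined {
  // Drop frames that are already superseded by a newer frame that is also due.
  while (buffer.length > 1 && buffer[1].timestampUs <= audioClockUs) {
    buffer.shift()!.frame.close();
  }
  // Present the next frame once the audio clock has reached its timestamp.
  return buffer.length > 0 && buffer[0].timestampUs <= audioClockUs ? buffer.shift() : undefined;
}
```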

• vlovich123 14 hours ago

If you're delivering audio and video separately, the blocking is irrelevant to the need to solve synchronization. That's why some amount of buffering (a few frames of video at least) on the receiver is needed to hide the jitter between packets / make sure you have the video. You can go super low latency with no buffering, but then you need to drop out video/audio when issues occur, and those will be visible as glitches; it depends on how good your network is.

• kixelated 20 hours ago

Each track is further segmented into streams. So you can prioritize new > old, in addition to audio > video.
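
As a rough sketch of what that prioritization could look like on the sending side (the weighting scheme is made up; `sendOrder` is the WebTransport per-stream priority hint, where the browser supports it):

```ts
// Open one unidirectional stream per group, hinting audio above video and
// newer groups above older ones via sendOrder (higher = sent first).
async function openGroupStream(
  transport: WebTransport,
  kind: "audio" | "video",
  groupSequence: number,
): Promise<WritableStream> {
  const base = kind === "audio" ? 1_000_000 : 0; // audio > video
  const sendOrder = base + groupSequence;        // new > old
  return transport.createUnidirectionalStream({ sendOrder });
}
```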

• madsushi 20 hours ago

Depending on the streaming protocol (e.g. WARP), you can specify that the tracks (audio vs. video) need to be time-aligned, so each group (chunk of video or audio) starts at the same time and lasts the same length. I think this means you'll get resync'd at the start of the next group.

• ddritzenhoff 18 hours ago

Hi, I've got one.

Does your team have any concrete plans to reduce the TCP vs. QUIC gap with respect to goodput [1]? The linked paper reports up to a 9.8% video bitrate reduction from HTTP/2 (TCP) to HTTP/3 (QUIC). Obviously, MoQ is based on a slightly different stack, so the results don't exactly generalize. I can imagine the problems are similar, though.

(I find this stuff fascinating, as I spent the last few months investigating the AF_XDP datapath for MsQuic as part of my master's thesis. I basically came to the conclusion that GSO/GRO is a better alternative and that QUIC desperately needs more hardware offloads :p)

[1]: https://arxiv.org/pdf/2310.09423

• johncolanduoni 17 hours ago

QUIC implementations are definitely not tuned well in practice for 600Mbps flows on low latency, low loss networks, as the paper attests. But I don’t think almost any uses of video streaming fit that bill. Even streaming 4K video via Netflix or similar is tens of Mbps. In general if you don’t have loss or the need to rapidly establish connections, QUIC performance is not even theoretically better, let alone in practice.

P.S. if there’s a public link to your masters thesis - please post it! I’d love to read how that shook out, even if AF_XDP didn’t fit in the end.

• englishm 9 hours ago

Good question! I can't speak concretely to our plans for optimizations at that level of the stack at this stage, but it's true that, broadly speaking, QUIC currently lags behind some of the performance optimizations that TCP has developed over the years, particularly in the area of crypto, where hardware offload capabilities can have a major impact.

The good news is that there are strong incentives for the industry to develop performance optimizations for HTTP/3, and by also building atop QUIC, MoQ stands to benefit when such QUIC-stack optimizations come along.

Regarding GSO/GRO - I recently attended an ANRW presentation of a paper[1] which reached similar conclusions regarding kernel bypass. Given the topic of your thesis, I'd be curious to hear your thoughts on this paper's other conclusions.

[1]: https://dl.acm.org/doi/10.1145/3744200.3744780

• torginus 20 hours ago

Hi! I have a few :)

How close are we to having QUIC actually usable in browsers (meaning both browsers and infrastructure support it, and it 'just works')?

How does QUIC get around the NAT problem? WebRTC requires STUN/TURN to get through full cone NAT; the latter in particular is problematic, since it requires a bunch of infra to run.

• englishm 20 hours ago

QUIC is already quite widely used! We see close to 10% of HTTP requests using HTTP/3: https://radar.cloudflare.com/adoption-and-usage

As for the NAT problem, that's mainly an issue for peer-to-peer scenarios. If you have a publicly addressable server at one end, you don't need all of the complications of a full ICE stack, even for WebRTC. For cases where you do need TURN (e.g. for WebRTC with clients that may be on networks where UDP is completely blocked), you can use hosted services, see https://iceperf.com/ for some options.

And as for MoQ - the main thing it requires from browsers is a WebTransport implementation. Chrome and Firefox already have support and Safari has started shipping an early version behind a feature flag. To make everything "just work" we'll need to finish some "streaming format" standards, but the good news is that you don't need to wait for that to be standardized if you control the original publisher and the end subscriber - you can make up your own and the fan out infrastructure in the middle (like the MoQ relay network we've deployed) doesn't care at all what you do at that layer.
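
For anyone curious what the browser side of that looks like, a minimal WebTransport connection sketch (the path is illustrative, and the MoQT session setup that runs on top of it is omitted):

```ts
// Requires a browser with WebTransport (Chrome, Firefox; Safari behind a flag).
async function connect(url = "https://relay.moq.dev/example") {
  const transport = new WebTransport(url);
  await transport.ready; // QUIC + TLS handshake complete

  // MoQT then runs its setup and subscribe messages over streams; that layer
  // (and any custom streaming format) is up to the application.
  const control = await transport.createBidirectionalStream();
  return { transport, control };
}
```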

• torginus 19 hours ago

Thanks for the answer!

Unfortunately the NAT problem is more common than you think :( Lots of corporate networks use full cone NAT (I know ours does), and so does AWS (if you don't have a public IP, but go through igw), so some sort of NAT punchthrough seems to be necessary for WebRTC.

I wonder if WebTransport has its own solution to the problem.

But I guess you can always rely on TURN. By the way, does MoQ have some sort of ICE negotiation mechanism, or do we need to build that on top?

• kixelated 18 hours ago

I answered in another reply, but client -> server protocols like TCP and QUIC don't have an issue traversing NATs. The biggest problem you'll run into are corporate firewalls blocking UDP, but hopefully HTTP/3 adoption helps that (UDP :443, same as WebTransport).

• kixelated 20 hours ago

Chrome and Firefox support WebTransport. Safari has announced intent to support it and they already use QUIC under the hood for HTTP/3.

Cloud services are pretty TCP/HTTP centric which can be annoying. Any provider that gives you UDP support can be used with QUIC, but you're in charge of certificates and load balancing.

QUIC is client->server so NATs are not a problem; 1 RTT to establish a connection. Iroh is an attempt at P2P QUIC using similar techniques to WebRTC but I don't think browser support will be a thing.

• valorzard 20 hours ago

Last I checked, Iroh is gonna use WebRTC datachannels to run QUIC over SCTP

• kixelated 19 hours ago

That is all sorts of miserable. I had an initial prototype that emulated UDP over SCTP, running QUIC (without encryption) on top. The problem is that SCTP becomes the bottleneck, plus it's super complicated.

I immediately jumped ship to WebTransport when Chrome added support. But I suppose there's no other option if you need P2P support in the browser.

• riedel 9 hours ago

And while WebRTC solves some rather hard problems like P2P transfers, the beauty of DASH is that it can rely on existing servers and clients. So I am also quite puzzled by the comparison, particularly as the post doesn't get into much detail on the path forward. I sometimes feel we are getting back to an AOL-style Internet that just connects dedicated clients to a CDN.

• tschellenbach 21 hours ago

Hi, I work on our WebRTC streaming over at getstream.io.

WebRTC has a lot of annoying setup, but after it connects it offers low latency. How do you feel MoQ compares after the connection setup is completed? Any advantages / any issues?

• kixelated 21 hours ago

QUIC/WebTransport gives you the ability to drop media, either via stream or datagrams, so you can get the same sort of response to congestion as WebRTC. However, one flaw with MoQ right now is that Google's GCC congestion controller prioritizes latency over throughput, while QUIC's TCP-based congestion controllers prioritize throughput over latency. We can improve that on the server side, but will need browser support on the client side.

As for the media pipeline, there's no latency on the transmission side and the receiver can choose the latency. You literally have to build your own jitter buffer and choose when to render individual frames.
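
A bare-bones sketch of what such a receiver-side buffer might look like (illustrative only; frame timestamps are assumed to share a clock with `nowTs`):

```ts
// Render frames `targetLatencyMs` behind the newest playback position;
// frames that fall behind the playhead get skipped.
class JitterBuffer {
  private frames: { ts: number; frame: VideoFrame }[] = [];

  constructor(private targetLatencyMs: number) {}

  push(ts: number, frame: VideoFrame) {
    this.frames.push({ ts, frame });
    this.frames.sort((a, b) => a.ts - b.ts); // the network may reorder
  }

  // Called from the render loop; returns the frame due for display, if any.
  pop(nowTs: number): VideoFrame | undefined {
    const playhead = nowTs - this.targetLatencyMs;
    let due: VideoFrame | undefined;
    while (this.frames.length && this.frames[0].ts <= playhead) {
      due?.close(); // a stale due frame gets skipped
      due = this.frames.shift()!.frame;
    }
    return due;
  }
}
```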

• combyn8tor 19 hours ago

Is the load balancing of the relays out of scope? It doesn't seem to be addressed in the write up unless I missed it.

• kixelated 19 hours ago

EDIT: Sorry I just noticed this was directed to Cloudflare. They're using the same architecture as Cloudflare Realtime, their WebRTC offering.

`relay.moq.dev` currently uses GeoDNS to route to the closest edge. I'd like to use anycast like Cloudflare (and QUIC's preferred_address), but cloud offerings for anycast + UDP are limited.

The relay nodes currently form a mesh network and gossip origins between themselves. I used to work at Twitch on the CDN team, so I'd like to eventually add tiers, but it's overkill with near zero users.

The moq-relay and terraform code is all open source if you're super curious.

• nonane 19 hours ago

How much success have you had with GeoDNS? We've seen it fail when users are using privacy-respecting resolvers like 1.1.1.1: it gets the continent right but fails at the city/state level.

• kixelated 18 hours ago

It works well right now because there's only one edge per continent. But if I had traffic, anycast is definitely the way to go.

• bushbaba 13 hours ago

Anycast can have serious reliability challenges. It was common at GCP for a small-QPS user of anycast to have their load balancers nuked in a given PoP because it was backed by a single machine, while BGP still showed it as the best route. The major DNS-based offerings don't have such issues.

• kixelated 12 hours ago

QUIC has support for preferred address, where anycast is used for the QUIC handshake and then the connection migrates to a unicast address. It still has issues, but it's nice to have sticky established connections and avoid flapping mid-connection.

• wbl 12 hours ago

I work for a CDN that does DNS steering. DNS record lifetimes are nonzero and can be surprisingly long. But you do get some very fine control over where data goes if resolvers cooperate.

• englishm 19 hours ago

I plan to cover more of the internal implementation details at a future date, possibly at a conference this fall.

But I can at least say that we use anycast to route to a network-proximal colo.

• evilmonkey19 6 hours ago

I love this project and I've been reading kixelated's blog from time to time. I also follow him on GitHub.

First of all, congrats on such nice work to both kixelated and Cloudflare. I have a question regarding live streaming. Usually a live stream is watched by hundreds or thousands of people at the same time. Is there any plan to implement, or any possibility of using, multicast with MoQ? The issue before was that TCP was used for HTTP/1.1 and HTTP/2, but now that HTTP/3 uses UDP it seems like a feasible idea. I would like to hear your thoughts on that. I know folks at Akamai and the BBC were working on this.

• kixelated 2 hours ago

Thanks!

You don't need multicast! CDNs effectively implement multicast, with caching, in L7 instead of relying on routers and ISPs to implement it in L3. That's actually what I did at Twitch for 5 years.

In theory, multicast could reduce the traffic from CDN edge to ISP, but only for the largest broadcasts of the year (e.g. the Super Bowl). A lot of CDNs are getting around this by putting CDN edges within ISPs. Smaller events don't benefit because of the low probability of two viewers sharing the same path.

There's other issues with multicast, namely congestion control and encryption. Not unsolvable but the federated nature of multicast makes things more difficult to fix.

Multicast would benefit P2P the most. I just don't see it catching on given how huge CDNs have become. Even WebRTC, which would benefit from multicast the most and uses RTP (designed with multicast in mind), has shown no interest in supporting it. But I did hear a rumor that Google was using multicast for Meet within their network, so maaaybe?

• parhamn 17 hours ago

That "just announced" link is really good if you have no idea what this is about: https://blog.cloudflare.com/moq/ (I missed it)

• the8472 6 hours ago

> Sub-second latency at broadcast scale

Alas, no actual multicast.

• lxe 17 hours ago

> It's not just another protocol; it's a new design philosophy

My AI senses perked up at this one...

• brycewray 20 hours ago

Semi-related and just FYI for Firefox users who visit Cloudflare-hosted, HTTP/3-using sites:

https://bugzilla.mozilla.org/show_bug.cgi?id=1979683

• englishm 19 hours ago

Looks like it might be a happy eyeballs issue? I'll pass it along to folks who would know more about what that might be, thanks.

• Nathan2055 17 hours ago

Hmm…do you have the in-browser DNS over HTTPS resolver enabled? I personally can't reproduce this, but I'm using DoH with 1.1.1.1.

I've noticed that both Chrome and Firefox tend to have less consistent HTTP/3 usage when using system DNS instead of the DoH resolver because a lot of times the browser is unable to fetch HTTPS DNS records consistently (or at all) via the system resolver.

Since HTTP/3 support on the server has to be advertised by either an HTTPS DNS record or a cached Alt-Svc header from a previous successful HTTP/2 or HTTP/1.1 connection, and the browsers tend to prefer recycling already open connections rather than opening new ones (even if they would be "upgraded" in that case), it's often much trickier to get HTTP/3 to be used in that case. (Alt-Svc headers also sometimes don't cache consistently, especially in Firefox in my experience.)

Also, to make matters even worse, the browsers, especially Chrome, seem to automatically disable HTTP/3 support if connections fail often enough. This happened to me when I was using my university's Wi-Fi a lot, which seems to block a large (but inconsistent) amount of UDP traffic. If Chrome enters this state, it stops using HTTP/3 entirely, and provides no reasoning in the developer tools as to why (normally, if you enable the "Protocol" column in the developer tools Network tab, you can hover over the listed protocol to get a tooltip explaining how Chrome determined the selected protocol was the best option available; this tooltip doesn't appear in this "force disabled" state). Annoyingly, Chrome also doesn't (or can't) isolate this state to just one network, and instead I suddenly stopped being able to use HTTP/3 at home, either. The only actual solution/override to this is to go into about:flags (yes, I know it's chrome://flags now, I don't care) and make sure that the option for QUIC support is manually enabled. Even if it's already indicated as "enabled by default", this doesn't actually reflect the browser's true state. Firefox also similarly gives up on HTTP/3, but its mechanism seems to be much less "sticky" than Chrome's, and I haven't had any consistent issues with it.

To debug further: I'd first try checking to see if EncryptedClientHello is working for you or not; you can check https://tls-ech.dev to test that. ECH requires HTTPS DNS record support, so if that shows as working, you can ensure that your configuration is able to parse HTTPS records (that site also only uses the HTTPS record for the ECH key and uses HTTP/1.1 for the actual site, so it's fairly isolated from other problems). Next, you can try Fastly's HTTP/3 checker at https://http3.is which has the benefit of only using Alt-Svc headers to negotiate; this means that the first load will always use HTTP/2, but you should be able to refresh the page and get a successful HTTP/3 connection. Cloudflare's test page at https://cloudflare-quic.com uses both HTTPS DNS records and an Alt-Svc header, so if you are able to get an HTTP/3 connection to it first try, then you know that you're parsing HTTPS records properly.
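
One more quick check that may help alongside those test pages: the resource timing API exposes the negotiated protocol per request ("h3", "h2", or "http/1.1"; cross-origin entries need Timing-Allow-Origin to expose it):

```ts
// Run in the devtools console on the page you're testing.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
console.log("page:", nav?.nextHopProtocol);

for (const entry of performance.getEntriesByType("resource") as PerformanceResourceTiming[]) {
  console.log(entry.nextHopProtocol || "(hidden)", entry.name);
}
```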

Let me know how those tests perform for you; it's possible there is an issue in Firefox but it isn't occurring consistently for everyone due to one of the many issues I just listed.

(If anyone from Cloudflare happens to be reading this, you should know that you have some kind of misconfiguration blocking https://cloudflare-quic.com/favicon.ico and there's also a slight page load delay on that page because you're pulling one of the images out of the Wayback Machine via https://web.archive.org/web/20230424015350im_/https://www.cl... when you should use an "id_" link for images instead so the Internet Archive servers don't have to try and rewrite anything, which is the cause of most of the delays you typically see from the Wayback Machine. (I actually used that feature along with Cloudflare Workers to temporarily resurrect an entire site during a failed server move a couple of years back, it worked splendidly as soon as I learned about the id_ trick.) Alternatively, you could also just switch that asset back to https://www.cloudflare.com/img/nav/globe-lang-select-dark.sv... since it's still live on your main site anyway, so there's no need to pull it from the Wayback Machine.)

I've spent a lot of time experimenting with HTTP/3 and its weird quirks over the past couple of years. It's a great protocol, it just has a lot of bizarre and weirdly specific implementation and deployment issues.

• brycewray 29 minutes ago

Great details; thanks!

> Hmm…do you have the in-browser DNS over HTTPS resolver enabled? I personally can't reproduce this, but I'm using DoH with 1.1.1.1.

Yes, using DoH and Cloudflare (1.1.1.1). Have also tried it with 1.1.1.1 turned off; no differences.

As for the other suggestions, my results were the same with Firefox on both macOS and Fedora Linux:

- https://tls-ech.dev - EncryptedClientHello works on first try.

- https://http3.is - HTTP/3 works on second or third soft refresh.

- https://cloudflare-quic.com - (This is the one I reported initially) Stays at HTTP/2 despite numerous refreshes, soft or hard.

• simmervigor 13 hours ago

> If anyone from Cloudflare happens to be reading this, you should know that you have some kind of misconfiguration

Thanks for the detailed information. I'm a someone from Cloudflare responsible for this, we'll get it looked at.

• vient 20 hours ago

Limited to macOS? Does not reproduce in FF 141 and 142 on Windows.

• joshcartme 17 hours ago

It reproduces for me in FF 142 on Windows. When I first went to https://cloudflare-quic.com/ it said HTTP/3, but after a few hard refreshes it says HTTP/2 and hasn't gone back to 3

• vient 17 hours ago

Oh, I see - hard refresh consistently shows HTTP/2 but after one or two soft refreshes it becomes HTTP/3 for me until next hard refresh.

Edit: it is always second soft refresh for me that starts showing HTTP/3. Computers work in mysterious ways sometimes.

• joshcartme 17 hours ago

I restarted FF and am now seeing something similar. Hard refreshing alternates between 2 and 3, and soft refreshes quickly get back to 3 most of the time

• csinode 14 hours ago

For me (FF nightly on Linux) a hard refresh has a roughly 50/50 chance of choosing HTTP3 or HTTP2.

• brycewray 18 hours ago

No, I have also observed it in Firefox (via Flatpak) on Fedora Linux 42. When I filed the original GH issue that webcompat-bot turned into this Bugzilla item (https://github.com/webcompat/web-bugs/issues/168913), my full report didn't make it into Bugzilla.

• xer0x 17 hours ago

Awesome! Great job.

I read this as the first QUIC CDN and thought that can't be true. Dug a little deeper and learned that Media over QUIC is its own thing. Looks pretty cool.

• hn-user-42 21 hours ago

I found your first app interesting. You should submit it as a Show HN.

I used to work on a live video platform. I find MoQ interesting enough to work on it again.

• kixelated 20 hours ago

Yeah I will soon, but I could also use the time to fix up some more stuff.

• wiradikusuma 20 hours ago

Since it's "under the hood", as long as major browsers support it (I can't even find it in caniuse?), it's good.

And I guess if webview engines like Microsoft Edge WebView2 support it, then developers can use it immediately (by wrapping it).

But how about from the other side, I guess OBS and YouTube must start supporting it to actually be useful?

• englishm 20 hours ago

So... MoQ represents a bit of a move away from the all-in-one "black box" of web APIs like WebRTC. From the browser perspective, the main thing that matters is the WebTransport API. Using MoQT in conjunction with that WebTransport API, you now have various options for rendering the video as a player, for example WebCodecs. But if you can afford a bit more latency, you can also use APIs like MSE for playback and be able to use DRM.

And yeah, being able to publish from something like OBS is something I worked on before joining Cloudflare, but it depends a lot on what you do at the "streaming format" layer which is where all the media-aware details live. That layer is still evolving and developing with WARP being the leading spec so far. As that gels more, it'll make sense to bake things into OBS, etc. Already today though you can use Norsk (https://norsk.video/) to publish video using a rudimentary fMP4-based format similar to early versions of the WARP draft.

As for YouTube, Google has some folks who have been very active contributors to MoQT, but I'm not certain exactly how/where/when they plan to deploy it in products like YouTube.

• kixelated 19 hours ago

https://caniuse.com/webtransport https://caniuse.com/webcodecs

Technically, WebCodecs is not required since you can use MSE to render, but it's more work and higher latency.

Working on a WebSocket fallback and an Opus encoder in WASM to support Safari.
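
A quick capability check matching those caniuse links, just as a sketch of how a player might pick a path (the fallback branch is whatever the app provides, e.g. the WebSocket path mentioned above):

```ts
const hasWebTransport = "WebTransport" in globalThis;
const hasWebCodecs = "VideoDecoder" in globalThis;
const hasMSE = "MediaSource" in globalThis;

if (hasWebTransport && hasWebCodecs) {
  // Low-latency path: decode with WebCodecs, render to <canvas>.
} else if (hasWebTransport && hasMSE) {
  // Higher-latency path: remux into fMP4 and append to a SourceBuffer.
} else {
  // e.g. current Safari: fall back (WebSocket transport, WASM codecs, ...).
}
```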

• madsushi a day ago

Related, the WARP streaming protocol, as a candidate for what would ride over MoQ: https://datatracker.ietf.org/doc/draft-ietf-moq-warp/

• englishm 20 hours ago

Yes, exactly! I mention that in the post. Streaming formats are where a lot of interesting decisions can be made about how best to optimize QoE for different use cases. MoQT is designed to have enough levers to pull to enable a lot of clever tricks across a wide gamut of latency targets, while also being decoupled from all of the media details, so we can get good economies of scale by sharing fan-out infrastructure.

WARP's development (at the IETF) up until now has been largely spearheaded by Will Law, but it's an IETF spec so anyone can participate in the working group and help shape what the final standard looks like. WARP is a streaming format designed mainly for live streaming use cases, and builds on a lot of experience with other standards like DASH. If that doesn't fit your use case, you can also develop your own streaming format, and if it's something you think others could benefit from, too, you could bring it to the IETF to standardize.

• kixelated 20 hours ago

Hi, I originally wrote WARP and used something similar at Twitch. It supports CMAF segments, so the media encoding is backwards compatible with HLS/DASH and can share a cache, which is a big deal for a gradual production rollout.

• madsushi 20 hours ago

Thanks for the info! I was reading up on CMAF after seeing it mentioned on your blog.

• kixelated 19 hours ago

Yeah, and CMAF is just a fancy word for fMP4. The "f" in fMP4 means an MP4 file that has been split into fragments, usually at keyframe boundaries, but fragments can be as small as one frame if you're willing to put up with the overhead.

The Big Buck Bunny example on the website is actually streamed using CMAF -> MoQ code I wrote.

• valorzard 20 hours ago

Quic (ha) thing:

WebTransport on Firefox currently has issues. See: https://bugzilla.mozilla.org/show_bug.cgi?id=1969090

• hn-user-42 21 hours ago

Some interesting stuff and more technical details:

https://moq.dev/blog/first-cdn/

https://moq.dev/blog/first-app/

• dang 20 hours ago

(the first URL is same as OP because we merged this comment from a different thread - https://news.ycombinator.com/item?id=44984785)

• englishm 20 hours ago

Thank you dang!

• tamimio 17 hours ago

Awesome news, been following this for some time!

• templar_snow 19 hours ago

Glorious.

• NooneAtAll3 13 hours ago

I'm terrified of what will happen when the Cloudflare monopoly eventually enshittifies.

• MattRix 20 hours ago

A small note: I found the styling of your post made it annoying to read. You shouldn’t highlight key words so strongly, especially using the same green you’re using for links. It makes it take mental effort to tell them apart.

• pphysch 21 hours ago

Could MoQ be used for low-latency multiplayer/game networking in the browser?

• englishm 21 hours ago

You might be interested in looking at this RTC example for ideas about how to make bi-directional data flows for arbitrary groups of participants (or players) work through a relay.

https://hang.live/

It uses a feature we haven't yet implemented, but we're thinking about how we might implement it at our scale, SUBSCRIBE_ANNOUNCES[1].

[1]: https://www.ietf.org/archive/id/draft-ietf-moq-transport-12....

• englishm 21 hours ago

It could! We've mainly been focused on using it for audio and video for live streaming and RTC use cases, but the MoQT layer is very intentionally decoupled from the media details so the fan out infrastructure could actually be used for a lot of different things. You'd need to decide how you want to map your data to MoQT objects, groups, tracks, etc.
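
For a game, the mapping might look something like the sketch below; the names and layout are invented for illustration, not an actual MoQ library API:

```ts
// Hypothetical mapping of per-player state updates onto MoQT naming:
// one track per player/data type, one group per second of ticks, and
// objects ordered within the group. Subscribers can skip whole stale groups.
interface MoqAddress {
  track: string;
  group: number;
  object: number;
}

function addressFor(playerId: number, tick: number, seqInGroup: number): MoqAddress {
  return {
    track: `player/${playerId}/position`,
    group: Math.floor(tick / 60), // roughly one second per group at 60 Hz
    object: seqInGroup,
  };
}
```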

• pphysch 21 hours ago

Sweet, thanks! I'll tinker with it.

• dang 21 hours ago

Comments moved to https://news.ycombinator.com/item?id=44984785, which has the original source. (Edit: I mean the announcement this article is reporting on.)

Submitters: "Please submit the original source. If a post reports on something found on another site, submit the latter." - https://news.ycombinator.com/newsguidelines.html

Edit: since people seem to agree that this was the wrong move, I'm going to undo it.

Edit 2: undone now!

• kixelated 21 hours ago

Hey dang, I don't think my blog is a repost. It links the Cloudflare announcement but the content is completely different (and actually funny). Is there any way you could restore it?

• dang 21 hours ago

It's not that it's a repost; it's that it's mostly reporting on the content of another article. In such cases it's standard HN practice to prefer the latter, as the guidelines say (and have said since ancient times). It's not that you did anything wrong! and I'm sorry, I know it sucks to have an article doing well on the frontpage and then plummet for no obvious reason. But from a mod practice point of view this is a pretty clear call.

• englishm 21 hours ago

(OP of the Cloudflare blog & submission here) I think kixelated is saying enough stuff beyond our post that he deserves the credit for and this shouldn't be treated as a dupe. (emailed to say as much also)

• kixelated 21 hours ago

I linked the URL to their relay but otherwise all of the libraries, demos, rants, jokes are my own. I can remove the link to their post if that would help, or I could get Cloudflare's blessing. It's just a bit frustrating.

• hn-user-42 21 hours ago

What? Did you check the source? He is the original guy.

• dang 21 hours ago

The article is about a Cloudflare announcement. The original source in that case would be the Cloudflare announcement, no?

• therein 21 hours ago

Pretty cool. Correct me if I'm wrong, but this only works on Chrome?

• englishm 21 hours ago

Firefox also has WebTransport support and Safari has a work-in-progress implementation behind a developer mode feature-flag. Safari used to not work at all, but I know they've been putting more effort into it lately, so hopefully we'll be able to use MoQ in all three soon!

• therein 21 hours ago

I tried on FF (the Floorp fork) and it said no browser support, but I have all sorts of extensions like Chameleon etc., so it is probably a problem on my end that interfered with feature detection.

Works really nicely on Chrome, though. Looking forward to the Safari support as I find myself using Orion more and more.

• conradfr 21 hours ago

It works for me on FF macOS but not FF Android.

(meaning the link before the edit, https://moq.dev/watch/)

• kixelated 21 hours ago

Firefox should work, I just don't test it often. Lemme fix.