• pimeys@lemmy.nauk.io · 42 points · 3 months ago

    That’s why you write your protocol as a sync library, then implement the async IO separately and map the data through the protocol modules.

      • pimeys@lemmy.nauk.io · 42 up / 1 down · 3 months ago

        So basically your typical network protocol is something that converts an async stream of bytes into things like Postgres Row objects. What you do is write a synchronous library that does the byte conversion, then an asynchronous library that talks to the database with async functions; most of the business logic, converting the data coming off the async pipe, stays sync.
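
        A minimal sketch of that split in Rust (hypothetical names, assuming a tokio-style async reader and a made-up length-prefixed, tab-separated row format): the parser is a plain sync function, and the only async code is the part that pulls bytes off the socket.

        ```rust
        // Sync "protocol core": knows nothing about IO, it only turns bytes
        // it is handed into parsed values.
        struct RowParser;

        #[derive(Debug)]
        struct Row {
            values: Vec<String>,
        }

        impl RowParser {
            /// Pure, synchronous parsing: given a complete message payload,
            /// decode it into a Row. No sockets, no futures.
            fn parse_row(&self, payload: &[u8]) -> Result<Row, String> {
                let text = std::str::from_utf8(payload).map_err(|e| e.to_string())?;
                Ok(Row {
                    values: text.split('\t').map(str::to_owned).collect(),
                })
            }
        }

        // Async layer: only does IO. Read a length-prefixed frame off the wire,
        // then hand the bytes to the sync parser above.
        use tokio::io::{AsyncRead, AsyncReadExt};

        async fn read_row<R: AsyncRead + Unpin>(
            socket: &mut R,
            parser: &RowParser,
        ) -> Result<Row, String> {
            let mut len_buf = [0u8; 4];
            socket.read_exact(&mut len_buf).await.map_err(|e| e.to_string())?;
            let len = u32::from_be_bytes(len_buf) as usize;

            let mut payload = vec![0u8; len];
            socket.read_exact(&mut payload).await.map_err(|e| e.to_string())?;

            // All the protocol/business logic stays sync and reusable.
            parser.parse_row(&payload)
        }
        ```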

        Now, this can also be done in a higher-level application. Any server you write in 2024 is async by nature. Write the server part in async, and implement a sync set of mapping functions that take an incoming request and return a response. If you need a database, this sync set of functions maps the request to a database query, and your async code can then call the database with that query. Another set of sync functions maps the database result into an HTTP response. No need to color everything async.
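
        A sketch of the same idea at the application level (all names hypothetical, no particular framework assumed): the async handler only shuttles data across the IO boundary, while the mapping functions stay sync and trivially unit-testable.

        ```rust
        // Hypothetical types standing in for your web framework / DB driver.
        struct HttpRequest { user_id: u64 }
        struct HttpResponse { status: u16, body: String }
        struct Query { sql: String }
        struct DbRow { name: String }

        // Sync: request -> query. A pure function.
        // (Illustration only; use bind parameters in real code.)
        fn request_to_query(req: &HttpRequest) -> Query {
            Query { sql: format!("SELECT name FROM users WHERE id = {}", req.user_id) }
        }

        // Sync: database rows -> HTTP response. Also pure.
        fn rows_to_response(rows: &[DbRow]) -> HttpResponse {
            HttpResponse {
                status: 200,
                body: rows.iter().map(|r| r.name.clone()).collect::<Vec<_>>().join("\n"),
            }
        }

        // Async is only needed at the IO boundary.
        async fn run_query(_q: &Query) -> Vec<DbRow> {
            // imagine an async database driver call here
            vec![DbRow { name: "example".into() }]
        }

        async fn handle(req: HttpRequest) -> HttpResponse {
            let query = request_to_query(&req);   // sync
            let rows = run_query(&query).await;   // async IO
            rows_to_response(&rows)               // sync
        }
        ```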

        The good part of this approach is that if you want a completely sync version of the library or application, you just rewrite the async IO parts and reuse all the protocol business logic. And you can provide sync and async versions of your library too!
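
        For instance, a blocking variant can drive the very same sync parser as the earlier sketch (reusing the hypothetical RowParser/Row types from above); only the IO calls change.

        ```rust
        use std::io::Read;

        // Same RowParser / Row as in the earlier sketch; only the IO differs.
        fn read_row_blocking<R: Read>(socket: &mut R, parser: &RowParser) -> Result<Row, String> {
            let mut len_buf = [0u8; 4];
            socket.read_exact(&mut len_buf).map_err(|e| e.to_string())?;
            let len = u32::from_be_bytes(len_buf) as usize;

            let mut payload = vec![0u8; len];
            socket.read_exact(&mut payload).map_err(|e| e.to_string())?;

            // Identical protocol logic, shared between the sync and async front ends.
            parser.parse_row(&payload)
        }
        ```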

        • azimir@lemmy.ml · 10 points · 3 months ago

          This approach is so much nicer than the threading/queuing approaches we used to have. Once async showed up, a ton of the work got pulled out of protocol handling and distributed-subsystem sync efforts.

          Long lived the multithreaded C++ server buffer! Today, async begins to rule the roost.

            • pimeys@lemmy.nauk.io · 4 points · 3 months ago

            It kind of falls apart with certain protocols, though. I once wrote an async MSSQL client for Rust, and some of the data doesn’t state its size in the headers. So that kind of forced the business logic to be async too.
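
            A hedged sketch of what that pressure looks like (hypothetical terminator-based framing, not the actual TDS wire format): when the size isn’t known up front, the sync core has to become an incremental state machine, and the async read loop that drives it keeps getting pulled back in, so the boundary gets blurrier.

            ```rust
            // One common workaround (sans-IO style): the sync parser becomes a state
            // machine that can say "I don't know yet, feed me more bytes", and the
            // async loop keeps reading until it is satisfied. Workable, but the
            // parsing flow is now spread across the async read loop instead of one
            // tidy sync function.
            enum Step {
                NeedMoreData,        // size unknown, keep reading
                Done(Vec<String>),   // a complete value was decoded
            }

            struct Decoder { buf: Vec<u8> }

            impl Decoder {
                fn feed(&mut self, bytes: &[u8]) -> Step {
                    self.buf.extend_from_slice(bytes);
                    // Example framing: we only learn where the value ends when we
                    // finally see a trailing zero byte.
                    match self.buf.iter().position(|b| *b == 0) {
                        None => Step::NeedMoreData,
                        Some(end) => Step::Done(
                            String::from_utf8_lossy(&self.buf[..end])
                                .split('\t')
                                .map(str::to_owned)
                                .collect(),
                        ),
                    }
                }
            }
            ```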

              • azimir@lemmy.ml · 3 points · 3 months ago

              Yeah, those durn data size fields. At first you’re like “why would you do this? It’s specified in the spec, right?” Then you start consuming the data stream and go “oh, yeah, I need this”.

              I was doing some driver work for a real time location tracking board. The serial stream protocol was very well documented and designed. Plenty of byte length count fields, though.

    • psivchaz@reddthat.com · 10 up / 1 down · 3 months ago

      I like async but dislike await. I spend entirely too much time on everything I build trying to maximize how much I can do in parallel because I find it tremendously satisfying.
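
      In Rust terms (a made-up sketch assuming tokio with its usual features; fetch_user and fetch_orders are hypothetical), the satisfying part is starting independent work together and only awaiting at the join point:

      ```rust
      use std::time::Duration;
      use tokio::time::sleep;

      // Two independent, made-up async jobs.
      async fn fetch_user() -> String {
          sleep(Duration::from_millis(50)).await;
          "user".to_string()
      }

      async fn fetch_orders() -> Vec<u32> {
          sleep(Duration::from_millis(50)).await;
          vec![1, 2, 3]
      }

      #[tokio::main]
      async fn main() {
          // Awaiting each in turn would take ~100 ms:
          // let user = fetch_user().await;
          // let orders = fetch_orders().await;

          // Running both concurrently and awaiting once takes ~50 ms.
          let (user, orders) = tokio::join!(fetch_user(), fetch_orders());
          println!("{user}: {orders:?}");
      }
      ```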

      • Vent@lemm.ee · 4 points · 3 months ago

        You probably already know this, or are talking about another language, but JavaScript is inherently single-threaded, so unless what you’re overlapping is I/O, you won’t actually see any performance boost from running things “in parallel”. Service workers do get their own thread, though.

      • Luvon@beehaw.org · 4 points · 3 months ago

        Await is usually there either because the performance doesn’t matter and the legibility is much higher with it, and/or because there is a series of asynchronous actions that depend on each other, and await lets you write them as if they were sync, because relative to each other they effectively are.
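
        For example (a made-up Rust-flavoured sketch; the same shape applies in JavaScript): each step needs the previous step’s result, so there is nothing to parallelise, and await just keeps the code readable.

        ```rust
        struct User { id: u64 }
        struct Cart { items: Vec<u64> }

        // Hypothetical async steps; each one depends on the previous result,
        // so there is nothing to run in parallel.
        async fn load_user(id: u64) -> User { User { id } }
        async fn load_cart(user: &User) -> Cart { Cart { items: vec![user.id, 42] } }
        async fn price_cart(cart: &Cart) -> u64 { cart.items.iter().sum() }

        async fn checkout(id: u64) -> u64 {
            // Reads top to bottom like sync code, even though every step yields.
            let user = load_user(id).await;
            let cart = load_cart(&user).await;
            price_cart(&cart).await
        }
        ```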