• msage@programming.dev · 15 days ago

    This is literally me at every possible discussion regarding any other RDBMS.

    My coworkers joked that I got paid for promoting Postgres.

    Then we switched from Percona to Patroni and everyone agreed that… fuck yes, PostgreSQL is the best.

    • locuester@lemmy.zip · 15 days ago

      I used to agree, but I recently tried out ClickHouse for high-ingestion-rate time-series data in the financial sector and I’m super impressed by it. Postgres was struggling, so we migrated.

      This isn’t to say that it’s better overall by any means, but simply that I did actually find a better tool at a certain limit.

      • qaz@lemmy.world · edited · 15 days ago

        I’ve been using ClickHouse too, and it’s significantly faster than Postgres for certain analytical workloads. In a benchmark on the OpenFoodFacts dataset (~9 GB), Postgres took 47 seconds while ClickHouse finished within 700 ms. Interestingly enough, TimescaleDB (a Postgres extension) took 6 seconds.

        | Database    | Insertion speed | Query time |
        |-------------|-----------------|------------|
        | ClickHouse  | 23.65 MB/s      | ≈650 ms    |
        | TimescaleDB | 12.79 MB/s      | ≈6 s       |
        | Postgres    | –               | ≈47 s      |
        | SQLite      | 45.77 MB/s¹     | ≈22 s      |
        | DuckDB      | 8.27 MB/s¹      | crashed    |

        All actions were performed through DataGrip.

        ¹ Insertion speed is influenced by reduced networking overhead due to the databases being in-process.

        Updates and deletes don’t work as well, and not being able to perform an upsert can be quite annoying. However, I’ve found the ReplacingMergeTree and AggregatingMergeTree table engines to be good replacements so far.
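
        For comparison, here’s a rough sketch of the upsert that ClickHouse lacks. The `ON CONFLICT … DO UPDATE` syntax is shared by Postgres and SQLite (≥ 3.24), so the sketch runs on sqlite3; the `prices` table and its columns are made up for illustration:

        ```python
        import sqlite3

        # Hypothetical prices table. ON CONFLICT ... DO UPDATE is the upsert
        # that ClickHouse doesn't offer; ReplacingMergeTree instead dedupes
        # rows with the same sorting key at merge time.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE prices (symbol TEXT PRIMARY KEY, price REAL)")

        def upsert(symbol, price):
            conn.execute(
                "INSERT INTO prices (symbol, price) VALUES (?, ?) "
                "ON CONFLICT (symbol) DO UPDATE SET price = excluded.price",
                (symbol, price),
            )

        upsert("AAPL", 190.0)
        upsert("AAPL", 191.5)  # second call updates the row instead of failing
        print(conn.execute("SELECT price FROM prices WHERE symbol = 'AAPL'").fetchone()[0])  # → 191.5
        ```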

        Also there’s !clickhouse@programming.dev

      • msage@programming.dev · 15 days ago

        If you can, share your experience!

        I also do finance, so if there is anything more to explore, I’m here to listen and learn.

        • locuester@lemmy.zip · 15 days ago

          ClickHouse has a unique performance gain when your system isn’t normalized operational data that’s updated often, but rather tables of time-series data being ingested write-only.

          An example: real-time stock prices or order books, at tens of thousands of rows per second. ClickHouse can write, merge, and aggregate records really nicely.

          Selects against ordered data with aggregates are then lightning fast. It has lots of nuances to learn and really powerful capabilities, but only for this type of use case.

          It doesn’t have atomic transactions, and updates and deletes perform very poorly.
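
          To make the access pattern concrete, here’s a rough sketch using sqlite3 as a stand-in (ClickHouse itself would use something like a MergeTree table with `ORDER BY (symbol, ts)`); the `ticks` table and its columns are hypothetical:

          ```python
          import sqlite3

          # Illustrates the write-only ingest + aggregate-read pattern described
          # above, using sqlite3 as a stand-in since ClickHouse isn't available
          # in a comment. Table and column names are made up.
          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE ticks (symbol TEXT, ts INTEGER, price REAL)")

          # Append-only ingest: rows are only ever inserted, never updated
          rows = [("AAPL", t, 190 + t) for t in range(10)]
          conn.executemany("INSERT INTO ticks VALUES (?, ?, ?)", rows)

          # Aggregate read over ordered data: the kind of query column stores excel at
          row = conn.execute(
              "SELECT MIN(price), MAX(price), COUNT(*) FROM ticks WHERE symbol = 'AAPL'"
          ).fetchone()
          print(row)  # → (190.0, 199.0, 10)
          ```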

        • Tja@programming.dev · 15 days ago

          For really high ingestion rates you have to start sharding, and it’s nice to have a DB that can do that natively. MongoDB and Influx are very popular, depending on the exact application.

    • cute_noker@feddit.dk · 15 days ago

      I have a colleague like that too, and then there’s the other camp that loves MySQL.

      Why do you like Postgres?

      • msage@programming.dev · edited · 15 days ago

        I made several lengthy presentations about many features, mainly those that are/were missing in MySQL.

        In short, MySQL is (and has been) shit since its inception, with insane defaults and lacking SQL support.

        After Oracle bought it, it got better, but it’s catching up with stuff that Postgres has had for 20+ years (in some cases).

        Also, fuck Oracle, it’s a shit company.

        Edit: if I had to pick the features I can’t live without, it would be RETURNING, COPY mode, and arrays.
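
        As a rough illustration of RETURNING (getting generated values back without a second round-trip): Postgres has had it for ages, and SQLite gained it in 3.35, which is what lets this sketch run on sqlite3. The `users` table is made up; COPY and array columns are Postgres-specific and aren’t shown here:

        ```python
        import sqlite3

        # RETURNING hands back server-generated values (here the autoincrement
        # id) in the same statement, with no follow-up SELECT.
        # Requires SQLite >= 3.35; the table name is hypothetical.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

        new_id = conn.execute(
            "INSERT INTO users (name) VALUES (?) RETURNING id", ("alice",)
        ).fetchone()[0]
        print(new_id)  # → 1
        ```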

        • chunkystyles@sopuli.xyz · 15 days ago

          As a complete newb to Postgres, I LOVE arrays.

          Postgres feels like all of the benefits of a database and a document store.

          • msage@programming.dev · 15 days ago

            Yeah, that was the goal.

            First make it a feature-complete, document-oriented database, then make it performant.

            And you can feel the benefits every step of the way. Things just work, features actually complement each other… and there’s always a way to make any crazy idea stick.

      • Overspark@feddit.nl · 15 days ago

        I usually tell people running MySQL that they would probably be better off with a NoSQL key-value store, SQLite, or PostgreSQL, in that order. Most people using MySQL don’t actually need an RDBMS. MySQL occupies this weird niche of being optimised for mostly-read, low-concurrency workloads while cosplaying as a proper database and being incompatible with SQL standards.

      • msage@programming.dev · 14 days ago

        I mean, with mysql_fdw I migrated the data quickly, and apart from manual ON DUPLICATE KEY UPDATE queries (or the rare FORCE INDEX) it works the same.

  • merari42@lemmy.world · 15 days ago

    As a (data) scientist I am not super familiar with most databases, but duckdb is great for what I need it for.

  • ohshit604@sh.itjust.works · edited · 15 days ago

    Things happen magically with Docker. Container needs PostgreSQL? Expose the port, define a volume, set a username and password, connect the service to that port, and forget PostgreSQL’s existence until data corruption.

    • Sneezycat@sopuli.xyz · 15 days ago

      Not data corruption, but I mistakenly replaced my .env file for Authentik, which contained the password for the PostgreSQL database…

      Cue a couple of existential crises over not having set up backups, thinking about nuking the whole installation, learning about PostgreSQL, and finally managing to manually set another password.

      Yeah, I feel several years older now…

    • cm0002@lemmy.world (OP) · 15 days ago

      forget PostgreSQL’s existence until data corruption.

      Oh, so about 2 hours then LMAO

    • Valorie12@lemmy.world · 15 days ago

      15 years ago I called it S-Q-L, and then I was told that was wrong and it’s “Sequel”. They kept calling it Sequel in college, so for the past 10 years I’ve said Sequel, My-Sequel, Sequel-lite, Postgres, Transact-Sequel, etc. Now y’all are telling me it’s not Sequel.