• 0 Posts
  • 40 Comments
Joined 1 year ago
Cake day: August 22nd, 2023

  • I don’t trust it because there’s no believable plan to make it commercially viable, so it’s just going to end up defunct or enshittified. Mastodon is up front: it’s a volunteer service, and you can either pay for your instance or roll the dice on it staying up. And there’s a built-in way to move on when one goes down.

    BlueSky is a B-corp, which theoretically means it can argue in court that its mission takes priority if sued by an investor, but that doesn’t in any way require it to make the mission its primary goal, and the realities of funding, money, and investors mean that’s almost certainly not going to happen.










  • It’s interesting how the most open instances aren’t the biggest ones with no user restrictions, but the smaller instances that no one has issues with. I moved away from LW because of performance issues, but I’m happy to be able to see both LW posts and BH posts. Sopuli has defederated from some instances, but I’m happy with their choices, so it’s as unrestricted as I want it to be. Others would choose an even less restrictive small instance, or, like yourself, just run their own to have complete freedom.


  • This definitely feels like academics squabbling with those in an associated research area over who gets the tiny sliver of funding trickling in to them while vastly more resources are directed elsewhere to actively harmful pursuits. Their beef should be with fossil fuel subsidies, or at least the people in the geology department being funded to figure out better fracking methods. I just can’t see a world where money spent on reef restoration work is actually the critical issue hindering key climate work.

    Beyond that, the “maybe reefs will just adapt” message near the end seems way more dangerous to a healthy climate than any issues they danced around with coral restoration. That’s exactly the argument made by the polluters and much more seductive to policymakers than coral restoration.


  • This is unhinged. Someone building the mainline of an interoperable communication service should absolutely be helping others who are building software to interoperate with it. Complaints can be made about Rochko rejecting PRs, but complaining that other people’s time is going toward a thing they don’t want is insane.

    “So they reached out to us and we had conversations about what they want to do, how they can do it, and we had more detailed conversations about how to do X, how to do Y protocol-wise. We helped them resolve some issues when they launched their first test of the federation because we want to see them succeed with this plan, so we help them debug and troubleshoot some of the stuff that they’re doing. Basically, we’re talking with each other about whatever issues come up.”

    But from the perspective of the hundreds of instances that have signed the anti-Meta FediPact, and the hundreds more blocking Threads without signing it, any resources devoted to improving the Threads/Mastodon integration are wasted.



  • Did the image get copied onto their servers in a manner they were not provided a legal right to? Then they violated copyright. Whatever they do after that isn’t the copyright violation.

    And this is obvious because they could easily assemble a dataset with no copyright issues. They could also attempt to get permission from the copyright holders for many other images, but that would be hard and/or costly, and some would refuse. They want to use the extra images but don’t want to get permission, so they just take them, just like anyone else who wants an image but doesn’t want to pay for it.


  • In life, people will frequently say things to you that won’t be the whole truth, but you can figure out what’s actually going on by looking at the context of the situation. This is commonly referred to as “being deceptive” or sometimes just “lying”. Corporate PR and salespeople, the ones who put out this press release, do it regularly.

    You don’t need to record the content categories of searches to make a good tool for displaying websites; you need that data to predict what users will search for. They’ve already said they want to focus on AI and linked to an example of the system they want to improve: their site recommender, complete with sponsored recommendations that could be sold at a higher price if the Mozilla AI could predict that “people in country X will soon be looking for vacations”.


  • Zaktor@sopuli.xyz to Technology@lemmy.ml: Firefox to collect your (anonymized) search data (edited 6 months ago)

    The example of the “search optimization” they want to improve is Firefox Suggest, which has sponsored results that could be promoted (and cost more) based on predictions of interest drawn from recent topic trends in your country. “Users in Belgium search for vacations more during X time of day” is exactly the sort of stuff you’d use to make ads more valuable. “Users in France follow a similar pattern, but two weeks later” is even better. Similarly, predicting waves of infection based on the rise and fall of “health” searches is useful for public health, but also for pushing or tabling ad campaigns.


  • You can technically modify any network’s weights however you want with whatever data you have lying around, but without the core training data you can’t verify that your modifications aren’t hurting the original capabilities. Fine-tuning (which LoRA is for) isn’t the same thing as freely modifying a trained network. You’re still generally stuck with the original trained capabilities; you’re just reworking the final layer(s) to redirect/tune them toward your problem. You can’t add pet faces to a human face detector, and if a new technique comes out that could improve accuracy, you can’t rebuild the model with it.

    In any case, if the inference software is actually open source and all the necessary data is free of any intellectual-property encumbrances, it runs without internet access or non-commodity hardware.

    Then it’s open source enough to live in my browser.

    So just free/noncorporate. A model is effectively a binary and the data is the source (the actual ML code is the compiler). If you don’t get the source, it’s not open source. A binary can be free and non-corporate, but it’s still not source code.
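
    Since LoRA-style fine-tuning came up above, here’s a toy sketch of the idea it rests on. All matrices, shapes, and numbers below are made up for illustration; this is not any real model. The frozen base weight W is the “binary” you’re stuck with: training only adjusts the small low-rank factors A and B, and the effective weight is W + (alpha / r) · (B @ A).

    ```python
    # Toy LoRA sketch with hypothetical 2x2 matrices (not a real model).
    # The pretrained weight W is frozen; only the low-rank factors A and B
    # would be trained, so the base capabilities can't be rebuilt from here.

    def matmul(X, Y):
        # Plain-Python matrix multiply for small lists of lists.
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    r, alpha = 1, 2.0                # rank and scaling (made-up values)
    W = [[1.0, 0.0], [0.0, 1.0]]     # frozen pretrained weight (identity here)
    B = [[0.0], [0.0]]               # trainable factor, initialized to zero
    A = [[0.5, 0.5]]                 # trainable low-rank factor

    def effective_weight():
        # W' = W + (alpha / r) * (B @ A); W itself is never modified.
        BA = matmul(B, A)
        return [[W[i][j] + (alpha / r) * BA[i][j] for j in range(2)]
                for i in range(2)]

    # Before any training step, B == 0, so the adapter changes nothing:
    assert effective_weight() == W

    # A (hypothetical) gradient step moves B; W is still untouched:
    B = [[0.1], [-0.1]]
    print(effective_weight())  # W plus a small rank-1 update
    ```

    The point of the sketch: everything trainable lives in the tiny A/B add-on, which is why fine-tuning can steer the output without giving you any ability to verify or rebuild what’s baked into W.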



  • Unless they’re going to publish their data, AI can’t be meaningfully open source. The code to build and train an ML model is mostly uninteresting. The problems come in the form of data and hyperparameter selection, which, either intentionally or unintentionally, do most of the shaping of the resulting system. When it’s published, it’ll just be a Python project with some magic numbers and “put data here” notes, with no indication of what went into data selection or the choice of those parameters.