Chana from the CEA Community Health team posted this to the EA Forum, where it sadly seems to have not gotten a lot of traction. I actually think it's quite an important post, so I am signal-boosting it here.

On the surface level it talks a lot about EA, but a lot of it also straightforwardly applies to the AI Alignment or Rationality communities, and as such it is also relevant to many readers on LessWrong.

Below I shamelessly copied over the whole post content (except the footnotes, since they were hard to copy-paste):

This post comes from finding out that Asya Bergal was having thoughts about this and was maybe going to write a post, thoughts I was having along similar lines, and a decision to combine energy and use the strategy fortnight as an excuse to get something out the door. A lot of this is written out of notes I took from a call with her, so she gets credit for a lot of the concrete examples and the impetus for writing a post shaped like this.

Interested in whether this resonates with people's experience!

Short version:

There's a lot of "social sense of trust" in EA, in my experience. There's a feeling that people, organizations and projects are broadly good and reasonable (often true!) that's based on a combination of general vibes, EA branding and a few other specific signals of approval, as well as an absence of negative signals.

Things like getting funding, being highly upvoted on the forum, being on podcasts, being high status and being EA-branded are fuzzy and often poor proxies for trustworthiness and for relevant people's views on the people, projects and organizations in question. I think it's likely common to overweight those signals of approval and the absence of disapproval.

Especially post-FTX, I'd like us to be well calibrated on what the vague intuition we download from the social web is telling us, and to place trust wisely.

It's easy to overestimate what you would know if there's a bad thing to know. Negative opinions (anywhere from "that person not so great" to "that organization potentially quite sketch, but I don't have any details") are not necessarily that likely to find their way to any given person, for a bunch of reasons, and we don't have great solutions for collecting and acting on character evidence that doesn't come along with specific bad actions.

If it's decision-relevant or otherwise important to know how much to trust a person or organization, I think it's a mistake to rely heavily on the above indicators, or on the "general feeling" in EA. Instead, get data if you can, and ask relevant people their actual thoughts - you might find them surprisingly out of step with what the vibe would indicate.

I'm pretty unsure what we can or should do as a community about this, but I have a few thoughts at the bottom, and having a post about it as something to point to might help. I think you'll get plenty out of this if you read the headings and read more under each heading if something piques your curiosity.

Part 1: What fuzzy proxies are people using and why would they be systematically overweighted?