Hello Tobias, I am very interested in the article you wrote with David Althaus, because we are working with UOW in Australia to propose a grant on this topic. I'd love to discuss this more with both of you; is there a way to contact you more directly? Thanks a lot! Juliette
jordan-pieters on What should CEEALAR be called?
CARE?
aarongertler on My New Game On Animal Welfare Just Came Out!
Congratulations on launching the game!
It's exciting to see people with unusual skills (at least "unusual within our community") using them to create EA art, especially when they go on to trial + promote the art in spaces outside the community. I hope the game gets a lot of people to think carefully about animals.
briantan on Forum update: New features (June 2021)
Posts created on the prior version of the Forum (pre-November 2018) were awarded 10 times their visible karma.
Could you say why this was done then? Was it to encourage posting?
aarongertler on Why scientific research is less effective in producing value than it could be: a mapping
"Lack of connection with end-user" reminds me of this essay from a psychology professor who recently left academia.
As someone who wanted to be a psychology professor for a year or so, I felt a spark of recognition when I read this section (emphasis mine):
The stuff we do doesn't matter
The thing that I'll probably most miss about academia is getting to research whatever I'm curious about. I'm not missing it just yet, however. While on the tenure track, I didn't think I was discovering hidden truths about the human mind by studying the American college undergraduate and Prolific.co participant.
Honestly, I'd felt pretty discouraged about research for a while. The things we study tend to have small effects, and when we can't detect those small effects a second time, it can be hard to tell why. (Possible explanations include noise, publication bias, errors in methods, differences in populations due to culture or even the passage of time.) It's why we spend so much time arguing about the fidelity of replication methods and hidden moderators.
Because the things we study have small, purportedly delicate effects, it's rare that we expect to see them applied and working in the real world. It's unpleasant to say it, but I feel that a lot of the research that we do doesn't matter. It's because it doesn't matter that we were able to get all the way into the 2010s before having a replication crisis. If we had screwed up our basic science in physics or biology or chemistry, we would notice pretty quickly when the engineers told us their bridges were collapsing or the crops were dying or the soda pop was going flat. By comparison, very little in social psychology seems to be applied or expected to work in any routinely detectable way.
The lackadaisical response I've sometimes received when raising concerns about papers has further convinced me that most social psych research does not matter. When I email a journal to say "none of these statistics add up" or "these effect sizes are ridiculously big," I often get no reply. Compare this to the sort of all-hands-on-deck response we might get if we found poison in the dog food. It doesn't matter that the product is no good -- we produce it for the sake of producing it, quality irrelevant.
In comparison, the stuff I'm doing as a data scientist isn't glamorous, but it's useful. Some of our projects save the company millions of dollars a year in shipping costs. That's a lot of gasoline and traffic and cardboard and dry ice that we're able to save. Reducing the amount of oil and packaging that gets used up might be the most useful thing I've done in years.
aarongertler on Linch's Shortform
I love this idea! Lots of fun ways to make infographics out of this, too.
Want to start out by turning this into a Forum question where people can suggest numbers they think are important? (If you don't, I plan to steal your idea for my own karmic benefit.)
aarongertler on What should the norms around privacy and evaluation in the EA community be?
I think it depends on how much information you have.
If the extent of your evaluation is a quick search for public info, and you don't find much, I think the responsible conclusion is "it's unclear what happened" rather than "something went wrong". I think this holds even for projects that obviously should have public outputs if they've gone well. If someone got a grant to publish a book, and there's no book, that might look like a failure -- but they also might have been diagnosed with cancer, or gotten a sudden offer for a promising job that left them with no time to write. (In the latter case, I'd hope they would give the grant back, but that's something a quick search probably wouldn't find.)
(That said, it still seems good to describe the search you did, just so future evaluators have something more to work with.)
On the other hand, if you've spoken to the person who got the grant, and they showed you their very best results, and you're fairly sure you aren't missing any critical information, it seems fine to publish a negative evaluation in almost every case (I say "almost" because this is a complicated question and possible exceptions abound.)
Depending on the depth of your search and the nature of the projects (I haven't read your post closely yet), I could see any of 1-5 being what I would do in your place.
aarongertler on What should the norms around privacy and evaluation in the EA community be?
...your credibility will be reduced when the truth comes out, even if it doesn't have any real logical bearing on your conclusions.
I've had this happen to me before, and it was annoying...
...but I still think that it's appropriate for people to reduce their trust in my conclusions if I'm getting "irrelevant details" wrong. If an author makes errors that I happen to notice, I'm going to raise my estimate of how many errors they've made that I didn't notice, or wouldn't be capable of noticing. (If a statistics paper gets enough basic facts wrong, I'm going to be more suspicious of the math, even if I lack the skills to fact-check that part.)
This extends to the author's conclusion; the irrelevant details aren't discrediting, but they are credibility-reducing.
(For what it's worth, if someone finds that I've gotten several details wrong in something I've written, that's probably a sign that I wrote it too quickly, didn't check it with other people, or was in some other condition that also reduced the strength of my reasoning.)
larks on What should the norms around privacy and evaluation in the EA community be?
Yup, I agree with that, and am typically happy to make such requested changes.
peterslattery on Why scientific research is less effective in producing value than it could be: a mapping
Thank you so much for this excellent work. I am very interested in this area. I look forward to seeing your suggested solutions! Please keep me in the loop.
Here is some related copy from a recent paper I wrote (not sure if it's relevant, but I think it makes some of these issues very salient):
"It can take up to 17 years for research evidence to disseminate into health care practice (Balas & Boren, 2000; Morris, Wooding, & Grant, 2011) and perhaps only 14 percent of available evidence enters daily clinical practice (Westfall, Mold, & Fagnan, 2007). For example, Antman, Lau, Kupelnick, Mosteller, & Chalmers (1992) found that it took 13 years after the publication of supportive evidence before medical experts began to recommend a new drug."
Let me know if you can't find the references.
The intro of this paper may also have useful copy.