Posts

Less often discussed EA emotional patterns 2022-06-27T19:33:32.245Z
How I failed to form views on AI safety 2022-04-17T11:05:23.920Z
Unsurprising things about the EA movement that surprised me 2022-03-30T17:08:56.082Z
[Creative Writing Contest] The Gifts 2021-10-27T19:28:19.476Z
[Creative Writing Contest] [Fiction] The Fey Deal 2021-10-08T03:06:50.639Z

Comments

Comment by Ada-Maaria Hyvärinen on EA for dumb people? · 2022-07-14T10:18:37.948Z · EA · GW

I generally agree with your comment, but I want to point out that for a person who does not feel like their achievements are "objectively" exceptionally impressive, Luisa's article can also come across as intimidating: "if a person who achieved all of this still thinks they are not good enough, then what about me?"

I think Olivia's post is especially valuable because she dared to post even though she does not have a list of achievements that would immediately convince readers that her insecurity/worry is all in her head. It is very relatable to a lot of folks (for example me) and I think she has been really brave to speak up about this!

Comment by Ada-Maaria Hyvärinen on Stockholm Student Hackathon: Lessons for next time · 2022-07-11T08:50:41.372Z · EA · GW

Thanks for the info! I didn't really get the part on ambitiousness – how is that connected to the amount of time participants want to spend on the event? (I can interpret this either as "they wouldn't do anything else anyway, so they might as well be here the whole weekend" or as "they don't want to commit to anything for longer than 1 day since they are not used to committing to things".)

Comment by Ada-Maaria Hyvärinen on Stockholm Student Hackathon: Lessons for next time · 2022-07-07T08:19:26.721Z · EA · GW

Thanks for this write-up! For me, a 10 hour hackathon sounds rather short, since with the lectures and evaluations it only leaves a few hours for the actual hacking, but I have only participated in hackathons where people actually programmed something, so maybe that makes the difference? Did the time feel short to you? Did you get any feedback on the event length from the participants, or did somebody say they wouldn't participate because the time commitment seemed too big (since you mention it was a big time commitment for them)?

Comment by Ada-Maaria Hyvärinen on What is the top concept that all EAs should understand? · 2022-07-05T13:27:55.608Z · EA · GW

Can you recommend a place where I could find this information, or will it spoil your test? I have looked into this in various places but I still have no idea what the current best set of AI forecasts or P(AI doom) would be.

Comment by Ada-Maaria Hyvärinen on Less often discussed EA emotional patterns · 2022-06-30T10:20:36.746Z · EA · GW

I'm really glad this post was useful to you :)

Thinking about this quote now, I think I should have written down more explicitly that it is possible to care a lot about having a positive impact, but not make it the definition of your self-worth; and that it is good to have positive impact as your goal and normal to be sad about not reaching your goals as you'd like to, but this sadness does not have to come with a feeling of worthlessness. I am still learning how to actually separate these on an emotional level.

Comment by Ada-Maaria Hyvärinen on Announcing giveffektivt.dk · 2022-06-22T05:55:10.248Z · EA · GW

Your Norwegian example is really inspiring in this space!

I just want to point out that in some places a bank account number to donate to is not going to be enough – for example in Finland the regulations on collecting donations and handling donated money are quite strict, so it is better to check your local requirements before starting to collect money.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-06-20T13:53:19.823Z · EA · GW

Hi Otto, I have been wanting to reply to you for a while, but I feel like my opinions keep changing, so writing coherent replies is hard (though having fluid opinions in my case seems like a good thing). For example, while I still think a precollected set of text alone is insufficient as a data source for any general intelligence, maybe training a model on text and then having it interact with humans could lead to it connecting words to referents (real-world objects), and maybe it would not necessarily need many reference points if the language model is rich enough? Then again, this seems to sound a bit like the concept of imagination, and I am worried I am anthropomorphising in a weird way.

Anyway, I still hold the intuition that generality is not necessarily the most important factor when thinking about future AI scenarios – this of course is an argument towards taking AI risk more seriously, because it should be more likely that someone will build either advanced narrow AI or advanced AGI than advanced AGI alone.

I liked "AGI safety from first principles" but I would still be reluctant to discuss it with say, my colleagues from my day job, so I think I would need something even more grounded to current tech, but I do understand why people do not keep writing that kind of papers because it does probably not directly help solving alignment. 

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-05-30T13:08:55.090Z · EA · GW

Yeah, I think we agree on this. I want to write out more later about what communication strategies might help people actually voice scepticism/concerns even if they are afraid of not meeting some standard of elaborateness.

My mathematics example actually tried to be about this: in my university, the teachers tried to make us forget that the teachers are more likely to be right, so that we would have to think about things on our own and voice scepticism even if we were objectively likely to be wrong. I remember another lecturer telling us: "if you finish an exercise and notice you did not use all the assumptions in your proof, you either did something wrong or you came up with a very important discovery". I liked how she stated that it was indeed possible for a person from our freshman group to make a novel discovery, however unlikely that was.

The point is that my lecturers tried to teach that there is not a certain level you have to reach before your opinions start to matter: you might be right even if you are a total beginner and the person you disagree with has a lot of experience.

This is something I would like to emphasize when doing EA community building myself, but it is not very easy. I've seen this when I've taught programming to kids. If a kid asks me if their program is "done" or "good", I'd say "you are the programmer, do you think your program does what it is supposed to do", but usually the kids think it is a trick question and I'm just withholding the correct answer for fun. Adults, too, do not always trust that I actually value their opinion.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-05-03T13:27:00.830Z · EA · GW

Hi Otto!

I agree that the example was not that great and that a lack of data sources can definitely be countered with general intelligence, like you describe. So it could definitely be possible that a generally intelligent agent could plan how to gather the data it needs. My gut feeling is still that it is impossible to develop such intelligence based on one data source (for example text, however large the amount), but of course there are already technologies that combine different data sources (such as self-driving cars), so this clearly is also not the limit. I'll have to think more about where this intuition of lack of data being a limit comes from, since it still feels relevant to me. Of course 100 years is a lot of time to gather data.

I'm not sure if imagination is the difference either. Maybe it is the belief in somebody actually implementing things that can be imagined. 

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-27T11:49:44.386Z · EA · GW

Thanks! And thank you for the research pointers.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-27T11:47:59.083Z · EA · GW

This intuition turned out harder to explain than I thought and got me thinking a lot about how to define "generality" and "intelligence" (like all talk about AGI does). But say, for example, that you want to build an automatic doctor that is able to examine a patient and diagnose what illness they most likely have. This is not very general in the sense that you can imagine this system as a function of "read all kinds of input about the person, output diagnosis", but I still think it provides an example of the difficulty of collecting data.

There are some data that can be collected quite easily by the user, because the user can for example take pictures of themselves, measure their temperature etc. And then there are some things the user might not be able to collect data about, such as "is this joint moving normally". I think it is not so likely we will be able to gather meaningful data about things like "how does a person's joint move if they are healthy" unless doctors start wearing gloves that track the position of their hands while doing the examination, and all this data is stored somewhere together with the doctor's interpretation.

To me it currently seems that we are collecting a lot of data about various things, but there are still many things for which there are no methods of collecting the relevant data, and that data does not seem like it would start getting collected as a by-product of something else (like in the case where you track what people buy from online stores). Also, a lot of data is unorganized and missing labels, and it can be hard to label after it has been collected.

I'm not sure if all of this was relevant or if I got side-tracked too much when thinking about a concrete example I can imagine.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-24T06:27:12.286Z · EA · GW

Thanks! It will be difficult to write an authentic response to TAP since these other responses were originally not meant to be public, but I will try to keep the same spirit if I end up writing more about my AI safety journey.

I actually do find AI safety interesting, it just seems that I think about a lot of stuff differently than many people in the field, and it is hard for me to pinpoint why. But the main motivations for spending a lot of time on forming personal views about AI safety are:
 

  • I want to understand x-risks better; AI risk is considered important among people who worry about x-risk a lot, and because of my background I should be able to understand the argument for it (better than, say, biorisk)
  • I find it confusing that understanding the argument is so hard, and that makes me worried (like I explained in the sections "The fear of the answer" and "Friends and appreciation")
  • I find it very annoying when I don't understand why some people are convinced by something, especially if these people are with me in a movement that is important for us all
Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-24T05:41:38.183Z · EA · GW

Yeah, I understand why you'd say that. However, it seems to me that there are other limitations to building AGI than finding the right algorithms. As a data scientist I am biased to think about available training data. Of course there is probably going to be progress on this as well in the future.

Comment by Ada-Maaria Hyvärinen on I burnt out at EAG. Let's talk about it. · 2022-04-23T16:28:23.673Z · EA · GW

Hi, just wanted to drop in to say:

  • You had an experience that you describe as burn-out less than a week ago – it's totally ok not to be fine yet! It's good you feel better but take the time you need to recover properly. 
  • I don't know how old you are but it is also ok to feel overwhelmed by EA later when you no longer feel like describing yourself as "just a kid". Doing your best to make the world a better place is hard for a person of any age.
  • The experience you had does not necessarily mean you would not be cut out for community building. You've now learned more of your boundaries and you might be more able to recognize red flags earlier in the future.

Good luck and I hope you learn something valuable about yourself from the ADHD assessment!
 

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-23T11:47:57.265Z · EA · GW

Hmm, with a non-zero probability in the next 100 years, the likelihood over a longer time frame should be bigger, given that there is nothing that makes developing AGI more difficult the more time passes, and I would imagine it is more likely to get easier than harder (unless something catastrophic happens). In other words, I don't think it is certainly impossible to build AGI, but I am very pessimistic about anything like current ML methods leading to AGI. A lot of people in the AI safety community seem to disagree with me on that, and I have not completely understood why.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-21T15:57:59.550Z · EA · GW

Hi Caleb! Very nice to read your reflection on what might make you think what you think. I related to many things you mentioned, such as wondering how much I think intelligence matters because of having wanted to be smart as a kid.

You understood correctly that intuitively, I think AI is less of a big deal than some people feel. This probably has a lot to do with my job, because it includes making estimates of whether problems can be solved with current technology given certain constraints, and it is better to err on the side of caution. Previously, one of my tasks was also to explain to people why AI is not a silver bullet and that modern ML solutions require things like training data and interfaces in order to be created and integrated into systems. Obviously, if the task is to find out all the things that future AI systems might be able to do at some point, you should take a quite different attitude than when trying to estimate what you yourself can implement right now. This is why I try to take a less conservative approach than would come naturally to me, but I think it still comes across as pretty conservative compared to many AI safety folks.

I also find GPT-3 fascinating but I think the feeling I get from it is not "wow, this thing seems actually intelligent" but rather "wow, statistics can really encompass so many different properties of language". I love language so it makes me happy.  But to me, it seems that GPT-3 is ultimately a cool showcase of the current data-centered ML approaches ("take a model that is based on a relatively non-complex idea[1], pour a huge amount of data into it, use model"). I don't see it as a direct stepping stone to science-automating AI, because it is my intuition that "doing science well" is not that well encompassed in the available training data. (I should probably reflect more on what the concrete difference is.)

Importantly, this does not mean I believe there can be no risks (or benefits!) from large language models, and models that will be developed in the near future.

I think it is very hard to be aware of your intuitions, incorporate new valid information into your world view and communicate with others at the same time. But I agree that for everyone it is better if we create better opportunities to do that, because otherwise we will lose information.
 

  1. ^

    not to say non-complexity would make the model somehow insignificant – quite the opposite, it is fascinating what attention mechanisms accomplish not only in NLP but in other domains as well

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-21T14:56:53.282Z · EA · GW

Hi Otto! Thanks, it was nice talking to you at EAG. (I did not include any interactions/information from this weekend's EAG in the post because I had written it before the conference and felt it should not be any longer than it already was, but I wanted to wait until the friends who are described as "my friends" in the post had read it before publishing it.)

I am not that convinced AGI is necessarily the most important component of x-risk from AI – I feel like there could be significant risks from powerful non-generally intelligent systems, but of course it is important to avoid all x-risk, so x-risk from AGI specifically is also worth talking about.

I don't enjoy putting numbers to estimates, but I understand why it can be a good idea, so I will try. At least then I can later see if I have changed my mind and by how much. I would give quite a low probability to 1), perhaps 1%? (I know this is lower than average estimates by AI researchers.) I think 2) on the other hand is very likely, maybe 99%, on the assumption that there can be enough differences between implemented AGIs to make a team of AGIs surpass a team of humans through, for example, more efficient communication (basically what Russell says about this in Human Compatible seems credible to me). Note that even if this would be superhuman intelligence, it could still be more stupid than some superintelligence scenarios. I would give a much lower probability to superintelligence as Bostrom describes it. 3) is hard to estimate without knowing much about the type of superintelligence, but I would spontaneously say something high, like 80%? So because of the low probability on 1), my concatenated estimate is still significantly lower than yours.
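(To make the concatenation explicit – assuming the three estimates simply chain together as conditional steps, which is roughly how I am thinking of them – that would be about 0.01 × 0.99 × 0.8 ≈ 0.008, so a bit under 1% overall.)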

I definitely would love to read more research on this as well.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-21T09:33:51.310Z · EA · GW

I feel like everyone I have ever talked about AI safety with would agree on the importance of thinking critically and staying skeptical, and this includes my facilitator and cohort members from the AGISF programme. 

I think a 1.5h discussion session between 5 people who have read 5 texts does not really allow going deep into any topics, since it is just ~3 minutes per participant per text on average. I think these kinds of programs are great for meeting new people, clearing up misconceptions and providing structure/accountability for actually reading the material, but by nature they are not that good for having in-depth debates. I think that's ok, but just to clarify why I think it is normal that I probably did not mention most of the things I described in this post during the discussion sessions.

But there is an additional reason that I think is more important to me, which is differentiating between performing skepticism and actually voicing true opinions. It is not possible for my facilitator to notice which one I am doing because they don't know me, and performing skepticism (in order to conform to the perceived standard of "you have to think about all of this critically and on your own, and you will probably arrive at similar conclusions to others in this field") looks the same as actually raising the confusions you have. This is why I thought I could convey this failure mode to others by comparing it to inner misalignment :)

When I was a Math freshman, my professor told us he always encourages people to ask questions during lectures. Often, it had happened that he'd explained a concept and nobody would ask anything. He'd check what the students had understood, and it would turn out they had not grasped the concept. When he asked why nobody had asked anything, the students would say that they did not understand enough to ask a good question. To avoid this dynamic, he told us that "I did not understand anything" counts as a valid question in his lectures. It helped somewhat, but at least I still often stayed silent instead of raising my hand and saying "I did not understand anything".

I feel like the same dynamic can easily happen when discussing AI safety (or any difficult EA concept, really). If people are encouraged to raise questions and concerns they might only raise the "good" ones, and stay silent if they feel like they just did not understand the concepts well enough (like I did in my avoidance strategy 1).

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-21T09:01:08.555Z · EA · GW

Like I said, it is based on my gut feeling, but I am fairly sure.

Is it your experience that adding more complexity and concatenating different ML models results in better quality and generality, and if so, in what domains? I would have the opposite intuition, especially in NLP.

Also, do you happen to know why "prosaic" practices are called "prosaic"? I have never understood the connection to the dictionary definition of "prosaic".

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-20T19:45:27.622Z · EA · GW

I'm still quite uncertain about my beliefs, but I don't think you got them quite right. Maybe a better summary is that I am generally pessimistic both about humans ever being able to create AGI and especially about humans being able to create safe AGI (it is a special case, so it should probably be harder than creating any AGI). I also think that relying a lot on strong unsafe systems (AI powered or not) can be an x-risk. This is why it is easier for me to understand why AI governance is a way to try to reduce x-risk (at least if actors in the world want to rely on unsafe systems; I don't know how much this happens, but I would not find it very surprising).

I wish I had a better understanding of how x-risk probabilities are estimated (as I said, I will try to look into that), but I don't directly understand why x-risk from AI would be a lot more probable than, say, biorisk (which I don't understand in detail at all).

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-20T19:17:10.457Z · EA · GW

That's right, thanks again for answering my question back then! 

Maybe I formulated my question wrong, but I understood from your answer that you first got interested in AI safety, and only then in DS/ML (you mentioned you had a CS background before, but not your academic AI experience). This is why I did not include you in this sample of 3 persons – I wanted to narrow the search to people who had more AI-specific background before getting into AI safety (not just CS). It is true that you did not mention Superintelligence either, but interesting to hear you also had a good opinion of it! If I had known both about your academic AI experience and that you liked Superintelligence, I could have made the number 4 (unless you think Superintelligence did not really influence you, in which case it would be 3 out of 4).

You were the only person who answered my PM but stated they got into AI safety before getting into DS/ML. One person did not answer, and the other 3 who answered stated they got into DS/ML before AI safety. I guess there are more than 6 people with some DS/ML background on the course channel, but I also know not everyone introduced themselves, so the sample size is very anecdotal anyway.

I also used the Slack to ask for recommendations of blog posts or similar stories on how people with DS/ML backgrounds got into AI safety. Aside from recommendations on who to talk to on the Slack, I got pointers to Stuart Russell's interview on Sam Harris' podcast and a Yudkowsky post.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-20T14:22:40.843Z · EA · GW

Thanks for giving me permission, I guess I can use this if I ever need the opinion of "the EA community" ;)

However, I don't think I'm ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for establishing better opinions on other cause prioritization issues.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-20T13:55:25.950Z · EA · GW

Generally, I find links a lot less frustrating if they are written by the person who sends me the link :) But now I have read the link you gave and don't know what I am supposed to do next, which is another reason I sometimes find linksharing a difficult means of communication. Like, do I comment on specific parts of your post, or describe how reading it influenced me, or how does the conversation continue? (If you find my reaction interesting: I was mostly unmoved by the post; I think I had seen most of the numbers and examples before, and there were some sentences and extrapolations that were quite off-putting for me, but I think the "minimalistic" style was nice.)

It would be nice to call and discuss if you are interested.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-20T12:56:33.247Z · EA · GW

Glad it may have sparked some ideas for any discussions you might be having in Israel :) For us in Finland, I feel like I at least personally need to get some more clarity on how to balance EA movement building efforts and possible cause prioritization related differences between movement builders. I think this is non-trivial because forming a consensus seems hard enough.

Curious to read any object-level response if you feel like writing one! If I end up writing any "Intro to AI Safety" thing, it will be in Finnish, so I'm not sure if you will understand it (it would be nice to have at least one coherent Finnish text about it that is not written by an astronomer or a paleontologist but by some technical person).

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-20T06:03:29.584Z · EA · GW

To clarify, my friends (even if they are very smart) did not come up with all the AI safety arguments by themselves, but started to engage with AI safety material because they had already been looking at the world and thinking "hmm, looks like AI is a big thing and could influence a lot of stuff in the future, hope it changes things for the good". So they quickly got on board after hearing that there are people seriously working on the topic, and it made them want to read more.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-20T05:48:41.383Z · EA · GW

I think you understood me in the same way as my friend did in the second part of the prologue, so I apparently give this impression. But to clarify, I am not certain that AI safety is impossible (I think it is hard, though), and the implications of that depend a lot on how much power AI systems will be given in the end, and what part of the damage they might cause is due to them being unsafe versus, for example, misuse, like you said.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-20T05:41:04.432Z · EA · GW

Interesting to hear your personal opinion on the persuasiveness of Superintelligence and Human Compatible! And thanks for designing the AGISF course, it was useful.

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-17T12:47:16.484Z · EA · GW

Thanks for the nice comment! Yes, I am quite uncomfortable with uncertainty and trying to work on that. Also, I feel like by now I am pretty involved in EA and ultimately feel welcome enough to be able to post a story like this here (or I feel like EA appreciates different views enough, despite me also feeling this pressure to conform at the same time).

Comment by Ada-Maaria Hyvärinen on How I failed to form views on AI safety · 2022-04-17T12:37:24.444Z · EA · GW

Thanks Aayush! Edited the sentence to hopefully be clearer now :)

Comment by Ada-Maaria Hyvärinen on Unsurprising things about the EA movement that surprised me · 2022-04-01T05:46:50.552Z · EA · GW

With EA career stories I think it is important to keep in mind that new members might not read them the same way as more engaged EAs who already know which organizations are considered cool and effective within EA. When I started attending local EA meetups I met a person who worked at OpenPhil (maybe as a contractor? I can't remember the details), but I did not find it particularly impressive because I did not know what Open Philanthropy was and assumed the "phil" stood for "philosophy".

Comment by Ada-Maaria Hyvärinen on [Creative Writing Contest] [Fiction] The Fey Deal · 2021-10-14T17:05:49.732Z · EA · GW

Glad to hear you like it! :)

Comment by Ada-Maaria Hyvärinen on [Creative Writing Contest] [Fiction] The Fey Deal · 2021-10-13T15:53:04.496Z · EA · GW

This one is nice as well! 

Personally I like the method of embedding the link in the story, but since many in my test audience considered it off-putting and too advertisement-like, I thought it better to trust their feedback, since I obviously already agree with the thought I'm trying to convey with my text. But like I said, I'm not certain what the best solution is – probably there is no perfect one.

Comment by Ada-Maaria Hyvärinen on [Creative Writing Contest] [Fiction] The Fey Deal · 2021-10-11T19:10:52.297Z · EA · GW

I tried out a couple of different ones and iterated based on feedback. 

One ending I considered would have been just leaving out the last paragraph and linking to GiveWell like this:

“Besides,” his best friend said. “If you actually want to save a life for 5000 dollars, you can do it in a way where you can verify how they are doing it and what they need your money for.”

“What do you mean?” he asked, now more confused than ever.


I also considered embedding the link explicitly in the story like this:
 

“Besides,” his best friend said. “If you actually want to save a life for 5000 dollars, you can do it in a way where you can verify how they are doing it and what they need your money for.”

“What do you mean?” he asked, now more confused than ever.

"I'll send you a link", she said.

And the link she sent him was this: https://www.givewell.org/ 

However, some of my testers found that this also broke the flow and that moving the link "outside" the story gave a less advertisement-like feeling.

And I also tried an ending that would wrap up the story more nicely (at this point the whole story was around 40% longer and not that well-edited in general):

“You know there are organizations that save people from dying of preventable illness and poverty,” she said. “The best ones can actually save a life for around that much, maybe even less.”

“But how do I know what those organizations are and how much they actually need to save somebody from dying?” he asked. “That sounds even more complicated than coming up with questions about side effects to the fey.”

“You don’t have to do that all by yourself,” she said. “There are people who are working on this stuff. You can see if you agree with their reasoning and conclusions, and then make your own decisions.”

This could be something. For a split second, he wished she hadn't told him that. If what she said was true, he would have to make a choice, again. But it was better to know than to not know. And suddenly the thought of actually being able to save a person who would otherwise die was so overwhelming he had a hard time wrapping his head around it.

“I guess I don’t really like making decisions,” he said.

“I feel you,” she said. “But if you don’t make a choice, that’s actually a decision, too. It just means you chose to do nothing.”

“Yeah,” he said. “I’ve noticed.” 

The fey were still in the woods, and would be in the woods, maybe forever. It didn’t matter. Anyway, he had to choose. But at least he could find out what his options were.

This longer ending was most liked by readers who were already quite familiar with EA, so I decided not to go for it, since I wanted to write for people who have not thought about or discussed EA that much yet. But of course, my pool of proof-readers was not that big and everyone was at least somewhat familiar with EA, even if not involved in the movement. It would be interesting to get feedback from total newbies.

Comment by Ada-Maaria Hyvärinen on [Creative Writing Contest] [Fiction] The Fey Deal · 2021-10-09T06:14:04.847Z · EA · GW

Don't be sorry! Feedback on language and grammar is very useful to me, since I usually write in Finnish. (This is probably the first time since middle school that I've written a piece of fiction in English.) 

Apparently the punctuation slightly depends on whether you are using British or American English and whether the work is fiction or non-fiction (https://en.wikipedia.org/wiki/Quotation_marks_in_English#Order_of_punctuation). Since this is fiction, you are in any case totally right about the commas going inside the quotes, and I will edit accordingly. Thanks for pointing this out!

Comment by Ada-Maaria Hyvärinen on [Creative Writing Contest] [Fiction] The Fey Deal · 2021-10-08T18:33:24.274Z · EA · GW

Thanks for the feedback! Deciding how to end the story was definitely the hardest part of writing this. Pulling the reader out of the fantasy was a deliberate choice, but that does not mean it was necessarily the best one – I did some A/B testing on my proofreading audience but I have to admit my sample size was not that big. Glad you liked it in general anyway :)