Open Channels FM
Navigating AI Integration and Ethics in Open Source Communities

In this episode, host Derek Hanson reconnects with his grad school friend, Kimberly Pace Becker, a specialist in applied linguistics and co-host of the “Women Talking About AI” podcast. Together, they talk about the intersection of language, technology, and AI while drawing on their unique backgrounds in building digital tools and analyzing the power of words.

Kimberly shares her deep insights from academia and her experience founding an edtech startup, delving into how large language models work, the challenges of AI ethics, and the impact of AI integration in open source projects like WordPress. Tune in as they discuss the promises and pitfalls of democratizing technology, the importance of diverse voices in AI development, and how we can build more ethical, sustainable, and inclusive digital futures.


The best time to migrate is before you’re under pressure. Omnisend moves everything essential for you now, so you’re fully ready when you plan for that large campaign. Use the code OpenChannels and get 30% off your first 3 months of any paid plan.



If you build stores for clients, WooCommerce gives you the flexibility to create exactly what merchants need. Customize workflows, extend with thousands of integrations, and scale without switching platforms. Check it out at WooCommerce.com.

Takeaways

Interdisciplinary Collaboration is Essential: Kimberly Pace Becker and Derek Hanson emphasized the value of combining research, linguistic analysis, and technical skills when approaching AI and open source projects, illustrating how interdisciplinary perspectives yield better outcomes 00:17 to 01:15.

Understanding AI’s Limits and Confidence: A key concern highlighted was the tendency for AI outputs to sound overconfident, often presenting incorrect information with undue certainty, which can mislead users and foster misinformation if not carefully managed 16:24 to 18:13.

Ethics and Guardrails are Crucial with AI Integration: When embedding AI into platforms like WordPress, both the AI’s guardrails and explicit end-user guidance are vital to prevent misuse, over-reliance, and hallucinated outputs, especially with features like auto-generated citations or code 13:31 to 15:37.

Auditability Should Include Outputs, Not Just Code: Open source communities excel at making code transparent and auditable but need to develop similar practices for auditing AI-generated content and its impact on diverse users 22:34 to 23:59.

Diversity and Inclusion in AI Development: Kimberly Pace Becker stressed the importance of including a broad range of voices including women, people of color, humanities experts, seniors, and people from various backgrounds in AI decision-making to combat bias and better assess impacts 27:15 to 30:15.

Embracing Friction in Responsible Development: Contrary to tech culture’s usual pursuit of removing user friction, deliberately inserting points for pause and reflection leads to better long-term outcomes, especially when dealing with uncertainty or ethical considerations 42:29 to 43:35.

Be Willing to Sit with Discomfort: Building and integrating AI responsibly often involves confronting discomfort, whether due to ethical dilemmas, uncertainty in outputs, or the need to change ingrained workflows. This willingness is presented as a vital aspect of growth and success 44:35.

Prioritize Incremental, Sustainable Progress: Both incremental and sustainable change are necessary when introducing AI features, avoiding the temptation for flashy, rapid updates that outpace users’ understanding or needs 32:25 to 33:56.

Ask Critical Questions Before Shipping AI Features: Before launching any AI-driven product or feature, creators should ask, “What does this tool do with uncertainty?” If the answer isn’t clear or satisfactory, the product may need more work before release 39:00.

Open Source as a Model for Thoughtful AI Adoption: Open source communities, given their transparency and collaborative ethos, are uniquely positioned to lead in thoughtful, ethical AI adoption, but only if they intentionally apply their core values to the outputs and impacts of AI, not just its codebase 40:02 to 41:21.

Questions Answered in This Episode

Q: How can integrating AI into open source platforms like WordPress impact the user experience?

A: Integrating AI into open source platforms can democratize access to advanced tools and expertise that were previously reserved for those with more resources or technical know-how. However, as discussed by Kimberly Pace Becker, it also raises concerns about overconfidence in AI-generated content and the need to address uncertainty, accuracy, and transparency for all users, not just experts (16:24).

Q: What are the main ethical concerns when using AI for academic writing or feedback?

A: Kimberly Pace Becker highlighted that while AI can provide valuable feedback and pattern recognition, it often struggles with uncertainty and can be overly confident, potentially leading to misinformation or issues with plagiarism (06:39). Ethical use requires clear guardrails, user education about AI’s limitations, and careful integration to avoid harmful academic practices.

Q: Who should be involved in making decisions about AI implementation in open source projects?

A: The conversation emphasized the importance of involving a diverse group of contributors, not just developers, but also linguists, rhetoricians, social scientists, and individuals from different age groups and backgrounds (29:56). This diversity ensures that multiple perspectives, especially regarding ethics, usability, and societal impact, are considered in AI decisions (30:05).

Q: Why is uncertainty in AI outputs a problem, and how should developers address it?

A: AI models tend to strip away uncertainty, generating confidently worded responses even when the information may not be fully accurate (16:24). Developers should build systems that either surface or appropriately handle uncertainty to avoid misleading users and to foster a more nuanced and responsible user experience (39:20).

Q: What specific “blind spots” do open source communities need to watch for when adopting AI?

A: Kimberly Pace Becker cautioned that technical contributors may overlook the auditability of human-facing outputs and the biases embedded in AI training data and annotation processes (22:35). She stressed that transparency at the infrastructure level doesn’t always guarantee ethical or accurate user impacts (23:34).

Q: How can non-developers contribute to open source projects, especially with the rise of AI tools?

A: The episode pointed out that AI is lowering barriers for non-developers to contribute by enabling them to submit bug reports, feature requests, and test interfaces without extensive coding skills (26:09). Bringing in users with varied backgrounds and perspectives enriches the project and better addresses the needs and concerns of a wider audience.

Q: What role does friction or discomfort play in responsible AI development?

A: Kimberly Pace Becker argued that some friction, such as slowing down to ask hard questions or requiring users to consider context, is necessary for rigor and care in AI implementation (42:46). Avoiding discomfort altogether can lead to shallow or careless development practices that ignore potential risks.

Q: How can open source communities ensure their AI features benefit a wide range of users ethically and sustainably?

A: Kimberly Pace Becker suggested prioritizing incremental, sustainable development and regularly questioning who benefits or is potentially disadvantaged by new AI features (32:25). Building in a process for surfacing uncertainty, encouraging interdisciplinary collaboration, and keeping users’ real needs in view are key to ethical and sustainable progress.

Timestamped Overview

  • 00:00 Understanding corpus linguistics basics
  • 03:30 Concerns about generative AI plagiarism
  • 07:19 Starting an online writing company
  • 09:45 Exploring AI in consultations
  • 14:29 Integrating AI with WordPress
  • 18:14 Experimenting with AI in work projects
  • 22:15 Open source code auditability
  • 23:24 Discussing machine learning bias
  • 27:15 Discussing big tech departures
  • 32:35 Importance of sustainable growth
  • 36:06 Concerns about tech regulation
  • 39:51 Discussing WordPress and AI integration
  • 41:24 Defending your work verbally
  • 46:14 Encouraging listeners to subscribe
Episode Transcript

Derek Hanson:
Well, welcome everybody back to another episode of Open Makers on the Open Channels FM podcast network. My guest today is someone I actually know from grad school. We’re both from Iowa State. We were there at the same time, and we came to language from completely different directions. I was on the building side, really leaning into WordPress, visual rhetoric, how technology changes how we communicate. So getting into the practical side of things. And Kimberly Pace Becker, who’s joining us today, was on the research side: applied linguistics, corpus analysis, how meaning and power actually work in text. Well, it turns out we were both, without knowing it, studying the foundations of what large language models do today. Kim now co-hosts the Women Talking About AI podcast, which is a fantastic podcast, so if you like AI, go subscribe to that right now. They’re focusing on AI in ways most builders haven’t caught up to yet. So we’re glad to welcome Kimberly today to learn how AI can impact our open source project. Welcome, Kim.

Kimberly Pace Becker:
Thanks. I’m glad to be here. Besides seeing you at Iowa State, I usually run into you on the flag football field out at Upward, or grocery shopping, or at kids’ sports events. But our kids are older now, so we don’t.

Derek Hanson:
Our kids are all older. Yep. I think I also run into you around the consignment shops, because our kids are older and we’re consigning all that old clothing from when they were young. So yeah, Ames is a small town, so it’s great to be able to bump into each other often. So Kimberly, I wanted to jump in: you spent decades studying how language works, the patterns and the power dynamics and the way meaning gets made in real communication. So when things like ChatGPT arrived, you said you recognized the patterns immediately. What did you think you saw that others didn’t in this initial wave of AI reaching consumers?

Kimberly Pace Becker:
Yeah, I don’t know that the first thing I noticed was the patterns. I think what I saw first was a couple things that just came from my training in corpus linguistics. A corpus is just a big database, in case your listeners don’t know what that is; it’s a fancy word for a big body of text that has been gathered from wherever. And a large language model on the back end is really just a big database of writing that was scraped from the web, open source and not open source, copyrighted material, all kinds of things. We don’t have to get into that. But the first thing I thought of, in terms of how you make a corpus from the academic perspective, is that you want it to represent whatever sample of language you are examining. And if you have such a huge sample that it’s just the whole functional web, plus some copyrighted materials thrown in, what are you really sampling? That’s going to be reflected back to you, of course, in the output. So that was kind of my first question: oh my goodness, are we just going to get a bunch of repurposed Reddit material, or blogs? I was thinking about it on the back end: what does it look like? And the other thing I quickly noticed was that people immediately started looking at it from the perspective of the output, like it’s writing for you, like it’s cheating. That was the first thread we heard in the education space: it’s a new form of plagiarism or whatever. But I was thinking, no, it does so much more than generate. I feel like “generative AI” as a name always did it a disservice. I knew that it could do feedback. My first research project was: if you gave it a paper that you knew was an A-graded paper, would it also give an A grade?
I wasn’t looking for it to give me actual language that I might use as my own, but just to recognize patterns, because that’s all it does. It’s just a pattern recognition machine. So I was like, if it can recognize patterns and output something that seems like human language, then it can also recognize patterns in our human language and give us feedback on that. And that is how I ended up founding an edtech company: it was a feedback company. It wasn’t about writing for anybody. It was about generating feedback. So those were my first two big aha moments.
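The corpus idea Kimberly describes can be sketched in a few lines: a corpus is just a body of text, and even crude frequency counts over it surface recurring patterns. This is a hypothetical toy example to illustrate the concept, not her actual research tooling:

```python
from collections import Counter

# A toy "corpus": in practice this would be millions of documents.
corpus = [
    "the results suggest that the effect may be significant",
    "we argue that the findings indicate a possible trend",
    "the data suggest that further research is needed",
]

def bigrams(text):
    """Yield adjacent word pairs (a crude collocation unit)."""
    words = text.split()
    return zip(words, words[1:])

# Count every adjacent word pair across the whole corpus.
counts = Counter(pair for doc in corpus for pair in bigrams(doc))

# Frequent pairs like ("suggest", "that") hint at recurring academic phrasing.
print(counts.most_common(3))
```

Whatever dominates the sample dominates the counts, which is exactly her point about an unrepresentative corpus being reflected back in the output.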

Derek Hanson:
Kimberly, you’ve lived on a couple different sides of AI then, with academia and then Moxie, your edtech startup. How have each of those experiences shaped how you think about AI ethics? As you were talking about plagiarism, that’s what I’m thinking about a lot: ethics and AI.

Kimberly Pace Becker:
Yeah. So I was a lecturer in the English department at Iowa State when ChatGPT was released. I had just graduated with my PhD and gotten hired there. I was mostly teaching business communications, freshman composition, and one graduate student research methods class. And I loved the grad students, not so much the undergrads, but I could see, even just in that first year. I immediately had my students playing with it. They were like, what is this thing? Everyone’s saying not to use it, it’s cheating. Not the grad students; they weren’t interested in it at that point. It was GPT-3, and it certainly wasn’t capable of PhD-level communication, so they weren’t curious about it much at all. But the undergrads were really curious, and they were shocked that I was having them work with it. But Iowa State is a very tech-forward English department. I mean, my degree is called Linguistics and Technology, and yours, you were in the techcomm department, essentially.

Derek Hanson:
Yeah, basically. And that’s what got me where I am now with WordPress. Like that is like the bedrock of my journey.

Kimberly Pace Becker:
Yeah, I mean, exactly. And the reason I chose Iowa State was because I was older, coming back to graduate school. I’d already had a career teaching at a community college. I had two little kids. I knew that I would be in my 40s when I graduated. So I wanted the tech to be part of it, because I thought it might give me an edge. And it totally has. I don’t know that I’m raking in the big bucks because of it, but it’s given me opportunities that I never would have had otherwise. Anyway, so that happened. And at the time I was working on the side as a dissertation and graduate student research writing coach for a private company, and had really hit it off with the owner of that company, Jessica Parker, who is now the co-host of Women Talking About AI. She and I set out to build an online graduate student writing company that had workshops, not just one-on-one consultations, which are really expensive because you need to hire a PhD coach and then you have to pay them at a PhD level. So we thought, well, we could scale this easier if we had really small group meetings, where each person paid something reasonable, like $25 an hour in a four-person group; then the coach is making a hundred or whatever. That’s roughly what we were thinking. And then we started getting curious about how AI might help give them feedback. Because one of the problems in a lot of graduate programs is there’s not enough feedback. The professors are busy, and it’s a very complicated and very long draft that you’re giving to a teacher. It’s not like an undergrad who’s writing a couple pages and getting feedback on that. It’s maybe a chapter of a dissertation. It could be 25, 30 pages. A journal article is typically 30 pages in our field. So that’s no small thing.
And so we thought, well, if it could just give some feedback, even if the feedback is not great, if it’s just better than average, could that work? So we started playing around with that, and our clients loved it, because it wasn’t just handing ChatGPT a paper and asking for feedback. It was: using Swales’ move-step framework, please give feedback on this introduction. Is this person accomplishing the communicative goals and using the strategies that a high-level research writer would use? So we were feeding in frameworks and using those for feedback. And they loved it. They were like, this is like having you here with us at two in the morning when we’re writing.

Derek Hanson:
Yeah.

Kimberly Pace Becker:
So after that we were like, we’ve got to lean into this. This is the future. We aren’t going to be paying humans very long to do this. And I think we were really early; I think one of our problems was we were really early. She could see, as a business owner, what’s the future of a one-on-one consultation if a machine could do, or even approximate, that conversation? This is the future. So she was thinking, okay, how do we future-proof a company? She’s got the entrepreneurial business brain but is also an academic, and I was the linguist who was like, yeah, we could absolutely bake in some linguistic frameworks behind this: give it some information about genre theory, or corpus linguistics principles, collocations, what words come together in academia versus elsewhere. It was just a really great synergy. So we adopted a no-code platform that, at the time (it has moved on now to agentic AI), basically allowed you to build in a prompt that the user would never see. So there was the ChatGPT system prompt, or we were also using Claude at the time, the Claude system prompt. But then we were embedding another prompt that the user didn’t see, that said things like: don’t write for the student, only give them feedback from this framework; never try to give them disciplinary content; just look at the writing from the perspective of X, Y, Z. So that was the idea behind the company, and it did great for a little while, until the frontier models like ChatGPT, Claude, and Gemini got so good that even without our expertise they were able to give that really good feedback.

Derek Hanson:
Yeah.

Kimberly Pace Becker:
And then kind of like.

Derek Hanson:
Yeah, you kind of touched on it a little bit. You mentioned agentic, right? This audience is predominantly creators and developers, and we’re all very familiar now with writing and using and sharing skills. So it almost sounds like what you were doing early on was creating the skills for these tools, running in the background. That’s really cool to hear: essentially, what has proven out to be working now is what you were doing early on with that really high-level academic writing.

Kimberly Pace Becker:
Yeah, because in the background, what we had was an agent that the user didn’t see. The user was still using a chatbot interface, but behind that we had essentially an agent functioning behind the scenes that had specific guardrails, do’s and don’ts. And then we even had some RAG, retrieval-augmented generation, where we would give it a certain amount of information. What we were trying to do when we ended up closing was connect it to a research database. Because one of the things all academics know is that it will hallucinate citations. Fake sources. And so we were having a real problem with that, even though every single starting prompt we gave our users said: do not use Moxie to cite sources; Moxie does not cite sources. Two questions we would get: number one, does Moxie evade AI detectors? And number two, why is Moxie hallucinating sources? And we’d say, well, because all AIs hallucinate sources unless they’re connected to a database.
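The hidden-prompt pattern Kimberly describes, a system prompt with guardrails that the user never sees wrapped around each message, looks roughly like this. The function and prompt wording here are illustrative assumptions, not Moxie’s actual implementation:

```python
# A sketch of a guardrailed feedback agent: the user sees only a chat box,
# but every request is silently wrapped in an invisible system prompt.
HIDDEN_SYSTEM_PROMPT = (
    "You are a writing-feedback assistant. "
    "Do not write text for the student; only give feedback. "
    "Never cite sources; you cannot verify citations. "
    "Comment only on writing patterns, not disciplinary content."
)

def build_request(user_message: str, framework: str) -> list[dict]:
    """Assemble the message list sent to a chat-completion style API."""
    return [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "system", "content": f"Give feedback using this framework: {framework}"},
        {"role": "user", "content": user_message},
    ]

messages = build_request(
    "Here is my dissertation introduction...",
    "Swales' move-step framework for research introductions",
)
print(messages[0]["content"])
```

The guardrails live entirely server-side, which is why users still managed to ask Moxie for citations: a hidden prompt steers the model but cannot guarantee behavior, hence the separate user-facing warnings.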

Derek Hanson:
Yeah, those are some of those guardrails and that idea of AI and ethics, right? You and Jessica were building something with a lot of those things in mind: not only giving the AI guardrails and instructions, but even telling the end user, you need to make sure you don’t trust it in these kinds of ways. Which I think is really, really important if you’re building something. That makes a lot of sense. So I touched a little bit on the idea of open source. Our audience lives and breathes open source, and the idea is that everything is built on democratizing publishing, giving everyone access to tools that were once reserved for the technology privileged. On the surface, AI seems like a natural extension of that mission, because we don’t have direct one-to-one access, like you said, to an academic professor to give us feedback on our papers or our writing. So AI, building on that, could be part of democratizing that expertise. Now, what could open source communities learn from your experience? And here’s some context: WordPress, you might not follow this, but we’re getting ready to release 7.0, and a very last-minute feature that was added in is AI connectors. For anybody developing a plugin, they’ll just be baked into the core platform. And WordPress powers 43% of the Internet, so that’s a large stake of the Internet that is going to make it possible to directly integrate AI. As an outsider to that, and from your experience, what is your honest read on that? What’s the move most likely to go wrong if we’re not leaning into and learning from experiences that somebody on your end might have gone through?

Kimberly Pace Becker:
Yeah, it’s a really good question. I think the main problem with embedding AI, from a linguist’s perspective, maybe not necessarily from a former AI business owner’s, is that it is so confident sounding, when you extrude text that has all the doubt stripped out of it, which a human would naturally put in there. Well, most humans. I mean, if you’ve ever taught freshman composition writers, you know they’re very sure about everything. They’re very certain.

Derek Hanson:
Yep.

Kimberly Pace Becker:
You know, that’s a developmental thing for people that age. They’re very certain. They haven’t had enough life experience to know, oh, that really strong intuition I had was actually totally wrong. But I have a concern about how the doubt and the certainty really get totally stripped out. I guess the best phrase I could give it is normalized overconfidence. And again, going back to the corpus it’s trained on: it’s blogs, corporate speak, Reddit posts, social media posts, and people position themselves very strongly there. Whereas an expert, not your everyday layperson, will tend to say, well, maybe, or it’s possible that this might happen, and signal where they stand on that. But these machines are just extruding this very confident-sounding text. The certainty is unearned, and the uncertainty is gone. And if they’re not surfacing that to a user, and I’m not sure what kinds of things you would even be outputting, any language that is stripped of uncertainty is sure to eventually lead to misinformation, because certainty, or uncertainty, has to be baked into everything we do. It’s on a spectrum. But that’s what I’m mostly concerned about: how sure they sound, how confidently wrong they sound. And people who don’t have a critical eye see their accuracy, their fluency, as the be-all, end-all. And it’s really not. The world just isn’t black and white like that.
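One crude way to see the stripped-out doubt Kimberly is describing is to count hedging markers, the "maybe" and "it's possible" phrasing experts use, in a piece of text. This is a toy heuristic with an illustrative marker list, not a real uncertainty metric:

```python
import re

# A small illustrative list of hedging markers common in expert academic prose.
HEDGES = ["maybe", "perhaps", "possibly", "might", "may", "it is possible", "suggests"]

def hedge_density(text: str) -> float:
    """Hedging markers per 100 words: a rough proxy for surfaced uncertainty."""
    lower = text.lower()
    words = len(lower.split())
    hits = sum(len(re.findall(r"\b" + re.escape(h) + r"\b", lower)) for h in HEDGES)
    return 100 * hits / max(words, 1)

expert = "It is possible that the effect may be smaller than it appears."
confident = "The effect is smaller than it appears."
print(hedge_density(expert) > hedge_density(confident))  # True
```

A confidently extruded rewrite of a hedged expert sentence scores near zero on a measure like this, which is the flattening she is worried about reaching end users unannounced.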

Derek Hanson:
Yeah, I see that a little bit in my own experience. I’ve totally leaned into AI, obviously, in my day-to-day work and things I’m experimenting with on the side, using it for content creation and for building things. For instance, if I’m building a theme, Claude Code totally sounds confident in how it created or implemented something. But even though I’m not a developer, I have enough expertise and knowledge about how the thing should work that I can see when the confidence is misplaced. It’ll say, yeah, it should have this put into this theme.json file, and I’m looking at it like, that’s not the end-user experience. This is incorrect, no matter how sure you sound. So there’s the content creation, and then there’s the code that’s developing it. I know Anthropic has recently stated that 90% of the code to build the AI platform Claude is written by Claude itself. To me that could be kind of scary, with all the hallucinations you were mentioning, but if the right people are doing it, then maybe this is a way to transform how we can work day to day.

Kimberly Pace Becker:
I think it’s less dangerous with code, because when you’re writing code, you’ll see immediately if it doesn’t work right. It’ll either do the thing you need it to do or not; there’s this very public measure of success. But I’m thinking about README files, i.e. directions to a human who’s going to be implementing something. Those, I think, would need to be especially carefully edited to make sure they’re not overconfident, and not leaving out really important details that a user might need to move forward on implementing something beyond just the code itself.

Derek Hanson:
Yeah, absolutely.

AI is really good at coding things, and open source software communities tend to also be very good at building things. What might be some of the blind spots if we’re really leaning in from a developer perspective, building software with AI? Maybe not even in how we’re using AI to build the software itself, because you’re right, that might be really solid, given the historical documentation and knowledge of the experts building the tools. But integrating something like AI for end users: what might be some blind spots that could trip up the experience for people using the software?

Kimberly Pace Becker:
Well, when I think about the open source community, I think what they’re really good at is auditability of code: are you able to validate what happened in the code, make it public, and audit it? There’s this crowdsourced ability to make sure that it’s working right and that it’s ethical. And I think what they may be less practiced at, and I’m no expert, this is just me riffing, is auditability of the outputs. The human-facing part: not just the thing the machine needs to read and do, but the thing that a human needs to read and do. Because you can publish all your infrastructure details, your settings, but the user may not know that these are inherently machines with bias baked in, and there’s nothing we can do about that. We cannot take that corpus these huge companies built and fix it. We can’t make it representative. We can’t scrape out the bias, because humans are biased; these machines learned bias from us. And of course bias is relative: whose bias is it? But I just keep coming back to rhetoric, and this will appeal to you, because I know you’ve studied that. The danger is assuming that transparency at the level of infrastructure is equal to transparency at the impact level, at the point where the user gets involved. Who is reading what this produces, and in what context? Here’s the rhetoric: author, audience, purpose. What are their assumptions? What are they coming to the table with? Those questions require a really different kind of contributor than the ones who showed up to just fix the bug. So I’m always asking questions like: who is benefiting from this, and who is this disabling? Where has this come from, and what were the biases?
Because it’s not just the biases baked into whatever data and text they scraped, but also the annotators. Annotating that data, the human reinforcement learning part of machine learning, is something I didn’t know anything about at all until I became fascinated by these language machines and started looking at what humans are doing on the back end. Now that we’re getting models that are really discipline-specific or specialized, who is the human on the back end doing that data work? Probably someone getting paid pennies by the word, or through Amazon’s Mechanical Turk, where you hire people to annotate data. That’s how these companies scale AI, and now it’s happening in academic spaces with Outlier, that company. They’re not paying quality wages for the kind of work that needs to be done, and that is going to be reflected in the lack of quality in the output: not just from what it started with, but from what the annotators did. I know that’s pretty technical, but I think a little bit of literacy around that is helpful for people.

Derek Hanson:
Yeah, a hundred percent. And as you started to reference contributors: the WordPress contributor community has always skewed heavily toward developers. But now, with AI, I think that barrier to entry is lowering quite a bit, to where a non-developer can contribute with the assistance of AI. Though it’s always been open for anybody to contribute in any manner; through that rhetorical lens, I can be an end user and still contribute to the project just by submitting bug issues or feature requests, or testing things. So that’s always been there. But if we’re thinking about AI, I don’t know what the makeup is of the people introducing AI into the software itself. Who’s in the room? You’re thinking about voices: who should be in the room for AI decisions? Not just how do we build it, because that’s kind of where things are at. We need to build this thing. So that’s a very developer-focused mindset. But what type of people should be in the room for those conversations, asking questions like: should we, or on whose terms? Like you were saying about audience: who might be missing in some of these conversations as we try to get this ramped up and into the project?

Kimberly Pace Becker:
Well, I think you can look at the kinds of people who have quit their big tech jobs recently, from Google, or, you know, walked away from Anthropic. I’m thinking specifically about Timnit Gebru, who left Google. She published a very famous paper with Emily Bender, who’s a linguist. It’s that stochastic parrots paper. I don’t know if you’ve heard of it, but it’s about how language models mirror biases. And they were very concerned from the beginning. This was before any of the rest of us knew about a language model. They were working with Google, and they were essentially whistleblowers. They were saying: this is moving too fast, this is going to pollute the world of information in a way that we cannot backtrack. This is going to make the Internet worse than it already is. And they quit. Timnit Gebru quit. She walked away. So, I mean, I think, first of all, there aren’t that many women in tech, so you’re not going to have as many women in the room. And I don’t know how to fix that. You know, I just don’t know how to fix that problem. Certainly people of color. But, you know, I work with seniors now, in a nonprofit that caters to senior adults, and they are at high risk of being scammed. And so I think a range of ages needs to be in the room. My teenager the other day, he was trying to find a reel to show me on Instagram. He’s like, mom, you’ll think this is funny. And you know how you’re just trying to find it and you’re scrolling through again, and finally he passed one. And I was like, wait, I want to see that. And he goes, no, that’s just AI. And I said, how do you know? I mean, we had seen 1.5 seconds of it. And he goes, I can just tell. And I was like, how do you know? I don’t understand.
It was a picture of a man picking up a woman and throwing her down a bowling alley lane, which could happen. I mean, she didn’t roll like a ball. She just fell. He dumped her on the ground, and she rolled over into the gutter. And he immediately knew it was AI. And I was like, but how did you know? And he couldn’t put his finger on it. But I think it’s not just the obvious things, like women and people of color. I’m thinking age is important, too. And people from a range of different backgrounds. Because mostly it’s educated people with a STEM background. So we need humanities people in there. We need linguists, rhetoricians. We need techcomm folks, sociologists, philosophers, historians, you know, on and on and on. And how do you get those people in the room? I don’t know.

Derek Hanson:
Maybe it starts with you being one of those people, right? Like, you’re definitely one of those people. And in a sense, I kind of started as one of those people, with rhetoric and writing as my background. So if you have that kind of thing baked into your personality, you bring that to the table. But yeah, there should be more of an intermixing of disciplines, I guess, right? In a lot of ways. And I know a university like Iowa State, and lots of other universities in that same area, have always been really good about extension and outreach and working within the local communities. And maybe it’s a matter of just being that one person within one of those communities to champion the idea of, hey, I need to go out and join this conversation. The thing that we are using, we should have a voice in that, to ensure that it’s being built and shipped responsibly and ethically for the greatest number of people and all those different demographics. Which is really, really hard to do, especially at scale. And that’s a challenge with WordPress, and why people might say, traditionally, WordPress moves so slow. When you think about the scale of an open source community as large as it is, and how many millions of people it serves, yeah, you have to move, like, diligently would be the best way to put it.

Kimberly Pace Becker:
You know, I also go back to something, you know, Volker Hegelheimer, he’s the, the department chair of the English department at Iowa State right now. But, but prior to that he was a professor and we probably both had him. I don’t know if you did, but I, I had a class.

Derek Hanson:
We had a class together with him. Yep.

Kimberly Pace Becker:
Yeah, okay. It was maybe in a computer-assisted language learning class or something. I don’t know. But one of the things that I learned from him is you don’t make tech into anything just for the sake of tech. It needs to have a real purpose. It needs to have a clear benefit to more than just the developer of it, to the user of it. And it needs to be incremental and sustainable. And those two values, incremental and sustainable, really, I mean, we talked about that every day when Jessica and I would meet. Like, if we make this change, is it going to overwhelm our users? Will they be able to understand it? Will they be able to work with it, flow with it immediately, or is it an overhaul that is going to really set them back? And then if we implement it, can we sustain it over time? I just don’t think you can go wrong if that’s your approach. And now I know that’s what agile means, because we hired developers, and I learned that there’s this whole framework called Agile. At first I was like, what are you even saying? But that’s really all it is. It’s just this idea that things are modular and you build slowly, and you don’t want to build it all at once, because you may have to go back and change one piece, and so the whole system might change. So, systems theory. And there’s just so much that I think builders and developers have to share with the world, and vice versa, that teachers and therapists and nurses, people who work with any number of different populations, can share. But yeah, I love what you said about it being very interdisciplinary.

Derek Hanson:
Yeah. So you might not have been following this, but we’ll point our listeners to some of these resources. The WordPress project has really made a concerted effort with an education initiative to bring open source into universities, to build this ideology and ethos of open source into younger generations and into students. And this is my call right now to researchers: like you said, systems theory, activity theory, a lot of fancy academic-sounding things that aren’t going to make too much sense to our listeners right now. But this is my call to those people. Open source projects are ripe areas of research that we can all benefit and learn from. So I would love to see an upcoming research project centered around that. That would be really cool. So we are literally, I think, in kind of a tsunami moment. People are asking, how do we ride the wave of AI? And it feels a little bit like a tsunami. In a lot of ways it’s kind of exhausting. And I’m just curious what you think. This wave that everybody wants to surf with AI and software and development, how do we stay on top of that wave and not risk getting pulled under it?

Kimberly Pace Becker:
Yeah, I think it’s really, really easy to buy into the hype. And the message from the hypers is something like: don’t get behind, don’t let that person who knows AI take your job away. Don’t let AI take your job away. And it’s this very scarcity-focused, fear-focused rhetoric. And I don’t know. Jessica and I used to say at Moxie, she’s a big believer in abundance. And at first I was like, okay, this is woo-woo, and I can’t get into it. But I think now I have a much more developed understanding of what she meant, which is there should be space for lots of different competing perspectives. Open source definitely espouses this. But my fear is that in a world where big tech is moving fast and breaking things, which is Mark Zuckerberg’s famous phrase, and probably he just meant, like, build fast and break the code. I don’t know what he meant. Let’s not read too much into it. Poor Mark Zuckerberg, he is just being crucified right now. But my concern is that if we don’t have some sort of regulation around this, there won’t be that room for abundance and competition and smaller companies like Moxie, or any small business especially. I mean, lots of small businesses use WordPress, or they try Squarespace, it happened to us, or Wix, and then they end up needing WordPress because it’s just so much more robust, and you can access documentation very easily, it’s a Google search away. We couldn’t do anything with Squarespace in terms of getting data for our users. That was just problem number one. But I do want to believe that this is a world where we can focus on abundance and not getting pulled under the wave.
And just think, well, what do they tell you if you’re at the beach and you get pulled under by a riptide? Swim parallel to the shore.

Derek Hanson:
Yeah, yep.

Kimberly Pace Becker:
You know, don’t run and jump on the beach, and don’t try to swim around it or go way out and avoid it. You gotta just keep your head on your shoulders and swim parallel to the beach. And that, I think, is the metaphor: maintain your integrity. What are you building for? Who’s benefiting? Who’s the end user? What do they really need? And not losing sight of that just to throw some fancy tech in for the sake of throwing fancy tech in. Because again, it wouldn’t be incremental and sustainable if the whole purpose is just this shiny toy.

Derek Hanson:
Yeah, absolutely. That’s a really good way to segue into my one question for anybody who wants to contribute to open source, who’s thinking about shipping a new AI feature or building a new AI product or anything in that vein. What would that question be before anybody hits publish or merges a pull request?

Kimberly Pace Becker:
I think it would be something about what does this tool do with uncertainty? In the face of uncertainty in the code and in what the user is going to input, does it surface the uncertainty? Does it hide it? Does it eliminate it? If you can’t answer that question, if you don’t know what it’s going to do when it inevitably faces a question of uncertainty, then you’re not ready to ship it. That is my big concern with AI: as a linguist, I see that it is just not able to handle a lot of uncertainty. It is very confidently wrong. And you do not want that baked into the infrastructure of a WordPress site. That seems like it would be really bad news.

Derek Hanson:
Yeah, yeah. As you were talking about the difference between Squarespace and Wix, proprietary platforms are not open about what they build. Right. And because WordPress is open about what it’s built, that’s already a great proving ground for AI to learn about the history of the entire project, and for people to know. AI can go to any resource, and I can check what AI is doing against the history, past and current, of what the open source project can do. So before we leave the Women Talking About AI podcast: it’s grounded in the belief, you mentioned demographics, that women engage with AI with curiosity, conscience, and care, and that they make it wiser. And I think WordPress actually has done a pretty good job of bringing in those types of voices. We’ve had entire releases that are all women-led, which I think is really cool. But those specific ideas, curiosity, conscience, and care, what does that look like practically for someone in the WordPress community who wants to do AI right? I think you alluded to it with looking at uncertainty, but I think these are really important. How would you leave people with some of those thoughts?

Kimberly Pace Becker:
I think my answer is about the same as it was when a graduate student or a Moxie user would ask me: is this going to pass muster? Is my professor going to use AI to detect this? And my question to them would be: well, can you orally defend it? Despite what the written word says, what the code says, can you stand up and, orally, meaning with your voice and no notes, defend not just the accuracy of it, but the complexity and the integrity of it? And then ask: am I being more confident than I should be? In terms of that uncertainty, does this sound like me? Would I sign my name to this? Or do I need to build in some sort of review step specifically to calibrate that uncertainty that I might have? Because we have this idea in tech, especially, of friction. Friction is bad. Friction is always a step the user doesn’t want to take. But we’ve got to slow down and pause. Care means slow down, pause. That means you may have to introduce some friction. It’s not going to kill us to be uncomfortable. And that’s something we struggled with. We would ask all these contextual questions before you ever even got started using Moxie. You had to come to the table with these answers. And people hated it. People hated it. And our investors would be like, you cannot build so much friction into your tool. And we would say, sorry, but this is what academic research requires. And I would argue this is what real life requires. There’s no fast track past rigor. You cannot get past friction. It’s everywhere. We can’t outrun it. It’s like that trope: you can’t go around it, you gotta go through it. It’s just not possible to avoid discomfort and friction. Sometimes it’s necessary.

Derek Hanson:
Absolutely. And what is always on the other side of discomfort? Right. Like joy and excitement and accomplishment. And thinking about, if we’re talking about files and training and creating agents and stuff, I think you touch on something that’s really valuable to think about. We’re creating files for agents called soul.md. Right. Like, we as humans need to maintain and bring the soul to everything we do.

Kimberly Pace Becker:
Well, we don’t want to get too woo, but I think it’s ultimately going to be a spiritual problem. It is not a problem for tech to solve. Ultimately, this is going to be a spiritual problem for humanity. Yeah. Look at what all the world’s religions say, it doesn’t matter what background you come from: sitting in discomfort is a big part of being a human.

Derek Hanson:
Yep. Absolutely. If things were too easy all the time, there’d be no sense of success or accomplishment. Right. You gotta be comfortable with being uncomfortable. And I think that’s a really good mantra to live by. So, Kimberly, this has been a great conversation. I’m really thankful that you joined. It’s one I think our community is really going to benefit from, and it gives us a lot of questions to think about. So I really appreciate you bringing your research lens to the space of open source, and hopefully we can take that and let it incubate and grow within what we’re doing on a day-to-day basis. So I was really glad we could do this. Thank you.

Kimberly Pace Becker:
Yeah, it was fun.

Derek Hanson:
Yeah. Okay.

Kimberly Pace Becker:
So it’s not exactly my wheelhouse, but I like being in the conversation.

Derek Hanson:
Well, that’s good. This is all about being interdisciplinary, and finding ways to build these bridges and conversations across disciplines, which I love. I still have a heart for academia, right? Even though I did not want to live in that space for my career, there’s still something about it that I appreciate and love, and I want to see good collaborations. So. All right, for everyone listening, you can find Kimberly Pace Becker on LinkedIn. Are you on X or anywhere else like that, or where can people go find you?

Kimberly Pace Becker:
LinkedIn’s my favorite social media. And then womentalkingaboutai.com.

Derek Hanson:
Okay, go subscribe to that podcast if you want to hear more from Kimberly and her co-host Jessica. Everyone listening to this episode: if you’re getting value from Open Makers and Open Channels FM, be sure to like and subscribe wherever you get your podcasts. And if you’re watching on YouTube, we are really trying to grow this channel, so if you like and subscribe, that’s going to help get the word out immensely. We’re just getting started there, and we appreciate any little bit that you can give. So thank you all for joining. Kimberly, thank you.

Kimberly Pace Becker:
Thanks, Derek. It was fun.
