ECREA

European Communication Research
and Education Association


ECREA WEEKLY digest ARTICLES

  • 07.05.2026 21:13 | Anonymous member (Administrator)

    September 17-18, 2026

    Online

    Deadline: May 15, 2026

    ECREA 2026 Post-Conference

    Organised by the ECREA Section Children, Youth and Media; ECREA TWG Aging and Communication; and CNSC – UOC research group

    This post-conference explores intergenerationality in contemporary mediated lives, focusing on how different generations interact, learn, and communicate across evolving media environments. It brings together scholars and practitioners to reflect on research, practices, and policies related to intergenerational communication. Topics include intergenerational research approaches, media use across age groups, digital literacy and inclusion, family communication, and critical perspectives on ageism and generational stereotypes.

    Call for Papers

    Abstract deadline: 15 May 2026

    Notification: 15 July 2026

    Submission form: https://docs.google.com/forms/d/e/1FAIpQLSfKci7psWj6RDZqq1Qo6Sn8Hxwxag_2F58iZirLGJKR1bmEkQ/viewform?pli=1

    More info: https://symposium.uoc.edu/149878/detail/connected-generations-media-communication-and-intergenerational-exchange-in-contemporary-lives-17-1.html

  • 07.05.2026 21:11 | Anonymous member (Administrator)

    CICANT

CICANT – Centre for Research in Applied Communication, Culture, and New Technologies (https://cicant.ulusofona.pt/) invites applications from international researchers affiliated with foreign higher education institutions or research centres for short-term scientific stays in Portugal. These visits aim to foster international collaboration, strengthen research networks, and promote interdisciplinary knowledge exchange in the fields of communication, media, arts, and digital technologies. The stays may take place in Lisbon or Porto and should last between seven and fifteen days.

    CICANT researchers work across disciplinary boundaries, drawing on approaches from communication sciences and the arts to address the challenges posed by ongoing digital and technological transformations. Research developed at the centre explores how change emerges through the interplay between technologies, materials, and social imaginaries, as well as issues related to cultural participation and socio-cultural transformation. Additional areas of focus include populism, extremism, and contemporary forms of civic engagement, alongside critical approaches to identities, cultural and creative processes, and practices. These activities frequently involve practice-based research, with a strong emphasis on knowledge transfer through the production and dissemination of content and technologies relevant to diverse target audiences, as well as close articulation with existing doctoral programmes.

    Research at CICANT is organised into the following main thematic strands: Media, Society and Literacies, and Media Arts, Creative Industries and Technologies. These strands are operationalised through three Research and Learning Communities (ReLeCos): FLAME – Futures of Literacies, Audiences, Media and Democracy; MACIT – Media Arts, Culture and Creative Industries and Media Technologies; and SUST_MEDIA – Media and Transformations for a Sustainable Future. Each of these communities integrates several laboratories that are responsible for organising research and training initiatives, with a strong emphasis on the involvement of doctoral and, whenever possible, master’s students.

    Selected Visiting Researchers will be integrated into CICANT’s research environment and will have access to a shared workspace for researchers. They will also benefit from access to the University’s infrastructures and resources in support of the objectives of their stay. Visiting researchers are expected to actively engage with the centre’s activities, including presenting their work in the form of a seminar or workshop, and exploring opportunities for future collaboration, including joint publications and research projects.

    Applicants must be affiliated with a non-Portuguese institution and should hold a doctoral degree or, in duly justified cases, be advanced doctoral candidates. They must demonstrate a research profile aligned with CICANT’s areas of activity and present a solid academic track record.

    Applications must be submitted in English and should include a curriculum vitae (maximum five pages), a research proposal or work plan (maximum three pages) outlining the objectives, planned activities, expected outcomes, and relevance to CICANT’s research areas, a motivation letter, and an indication of the preferred period of stay. A letter of support or expression of interest from a CICANT researcher may be included and is encouraged.

    Applicants will be evaluated based on the scientific quality and relevance of the proposed work, the applicant’s academic profile and experience, alignment with CICANT’s strategic priorities, and the potential for collaboration and impact.

Applications must be submitted electronically in PDF format by email to cicant@lusofona.pt, with the subject line "Call for Visiting Researchers (2026)", no later than 18:00 (Lisbon time) on 15 June 2026. Results will be communicated by email.

    The host institution will provide a grant of €1,500.00 to support travel and accommodation costs associated with the stay.

    Please note that visiting researchers remain responsible for arranging their travel, accommodation, and insurance unless otherwise specified. An official invitation letter will be issued to selected candidates, and the visit must take place within the agreed period.

    For additional queries: cicant@ulusofona.pt

  • 07.05.2026 21:10 | Anonymous member (Administrator)

    October 6-9 2026

    Aarhus University, Sandbjerg Estate

    Deadline: June 15, 2026

The aim of this interdisciplinary scholarly retreat is to bring together researchers from all fields working at the intersection of AI and storytelling to reflect on and discuss how we study narratives that are no longer authored, circulated, or experienced exclusively by humans.

New practices of storytelling are emerging and existing ones are being transformed by the popular uptake of LLM-based chatbots across professional, public, and recreational settings. Today, LLM-infused storytelling impacts all forms of textual practice: from art and creative writing to journalism, politics, and public debate; from marketing, social media, and influencer culture to everyday conversations, therapy, and intimate interactions, to name a few. For scholars working with narratives, these developments pose fundamental challenges, several of which revolve around questions of method. We invite contributions that address issues of methodology in this evolving landscape of human-machine narration.

    Relevant topics include, but are not limited to:

    • How can existing qualitative methods (e.g., close reading, positioning analysis, rhetorical analysis, interviews) and existing quantitative methods be adapted or revised to meet these new narrative practices?
    • How can narrative theory account for human–machine co-creation?
    • How can we analyze the ways in which interactions with LLM-based chatbots transform the generation and reception of core narrative elements such as story, discourse, and narrative act?
    • What kind of data do we need and what analytical approaches can be developed to understand narration with and to chatbots?
    • How do we analyze the way LLM-platform infrastructures (e.g., training data, guardrails, alignments) shape or create narrative practices?
    • How can core story-related concepts within narrative theory, literary theory, rhetoric, journalism studies, psychology, linguistics, and media studies such as ‘intentionality’, ‘causality’, ‘agency’, ‘purpose’, ‘author’, ‘meaning’, and ‘origin’ be rethought and investigated empirically in light of the current transformation?
    • What forms of data, evidence, and interpretation emerge through human-machine storytelling?
    • How do we work, critically and reflexively, from a starting point of corporate and technological black-boxing by Big Tech?
    • How can the interdisciplinary development of methods to unpack AI-storytelling be pursued and ensured?

To start answering questions such as these, the seminar invites contributions that may be exploratory or programmatic, fully formed or works in progress; the format is based on 30-minute presentations from each participant. We hope the seminar will lead to a publication on methods, storytelling, and generative AI. Confirmed seminar participants and speakers are Alexandra Georgakopoulou (King’s College, UK) and Torsa Ghosal (California State University, US).

    Practical information

    • Seminar dates: 6-9 October 2026
    • Abstract submission deadline: 15 June 2026
    • Notification of acceptance: 25 June 2026

    The seminar is free and all accommodation expenses are covered. Travel expenses must be covered individually.

    Submission: Send abstract (max 250 words) and a short bio (max. 100 words) to norsi@cc.au.dk

    Organisers: Associate Professor Stefan Iversen, Assistant Professor Ann-Katrine S. Nielsen, Postdoc Pernille Meyer, all Aarhus University, Denmark

    Funding: The seminar is funded by the research project GAITS (IRDF 2026–2030)

This HMN Seminar (short for Human-Machine narration) is organized by GAITS (https://projects.au.dk/gaits) and takes place at the Sandbjerg Estate in Southern Denmark, October 6-9, 2026. It welcomes a limited number of participants to allow for in-depth discussions and shared conceptual development.

    Apply by sending a brief bio and a 250-word abstract, describing your ongoing work with methods, storytelling, and generative AI to Stefan Iversen (norsi@cc.au.dk) no later than June 15, 2026.

  • 30.04.2026 14:39 | Anonymous member (Administrator)

    May 14, 2026 (6:30 - 8:00 PM, followed by drinks)

    LSE & online

    A public lecture by the DFC 

    Major online safety regulations and legislation are now in force across the UK and EU. Platforms have new duties, regulators have new powers, and expectations are high. But what has actually changed for children?

    Bringing together leading voices from regulation, legal scholarship and child rights, as well as new research evidence, the event will reflect on how regulation reshapes platform design, governance and accountability in practice. 

    Speakers:

    • Steve Wood, PrivacyX Consulting, former Deputy at the Information Commissioner's Office (ICO), will present new research, followed by responses from the panel members:
    • Baroness Beeban Kidron, Crossbench Peer, House of Lords, UK Parliament and Chair of the Management Committee at the Digital Futures for Children centre
    • Professor Orla Lynskey, Chair of Law and Technology at UCL
    • Professor Lorna Woods, OBE, Emeritus Professor of Internet Law at the University of Essex

    Chair: Sonia Livingstone, Professor at the Department of Media and Communications, LSE and Director of the Digital Futures for Children centre

Steve Wood: “The research shows that regulation has yet to drive systemic change in safety and privacy by design for children. Instead, platforms are investing more in parental controls than in default protections. At the same time, we observe a rise in age assurance measures and early regulatory effects on AI services used by children.”

    More information & free registration: https://www.digital-futures-for-children.net/events/child-rights-regulation

    Recent from the DFC in case you missed it:

    African children's rights in relation to the digital environment: child-informed provocations to guide digital policy and practice - https://www.digital-futures-for-children.net/our-work/African-childrens-rights

    The impact of General comment No. 25 in the UNCRC monitoring process and around the world: https://www.digital-futures-for-children.net/our-work/impact-gc25

    DFC annual research insights day blog overview: https://blogs.lse.ac.uk/medialse/2026/04/14/childrens-rights-in-the-digital-environment-have-been-defined-now-they-need-defending/ 

  • 30.04.2026 13:56 | Anonymous member (Administrator)

    Edited volume (Anthem Press) by Ester Cristaldi

    Deadline: June 30, 2026

    Chapter proposals are invited for the edited volume Artificial Intelligence and Cultural Meaning: Language, Images and Interpretation in the Digital Age, under contract with Anthem Press.

    The volume examines artificial intelligence as a cultural, semiotic, social and media phenomenon. Rather than approaching AI only as a technical system or computational tool, the book investigates how AI participates in the production, circulation and transformation of meaning in contemporary digital culture.

    The central premise of the volume is that AI does not simply process information. It increasingly mediates how people write, read, see, classify, imagine, remember and interpret the world. AI systems generate texts and images, organise visibility, shape public attention, classify social subjects, predict behaviour and participate in the construction of cultural narratives.

    The book is grounded in semiotics and linguistics, but it also welcomes interdisciplinary perspectives from cultural studies, media and communication studies, media sociology, digital sociology, digital humanities, visual culture, platform studies, critical data studies, journalism studies, environmental humanities, science and technology studies, and related fields.

    Topics

    Possible topics include:

    • AI, language and meaning
    • Large language models and linguistic authority
    • AI and language inequality
    • AI-generated images and visual culture
    • Synthetic media and visual disinformation
    • AI, public trust and the crisis of mediation
    • AI, platforms and public attention
    • Algorithmic visibility and digital inequality
    • AI, datafication and social classification
    • AI, creativity and cultural production
    • AI, cultural labour and the creative industries
    • AI, archives and cultural memory
    • AI, embodiment, interfaces and everyday experience
    • AI, environment, infrastructure and digital materiality
    • AI, interpretation and cultural authority
    • AI, media ecologies and affective publics
    • AI, memory, archives and the digital humanities

    Submission Guidelines

    Interested contributors are invited to submit:

    • a provisional chapter title;
    • an abstract of approximately 250–300 words;
    • a short biographical note of approximately 100 words;
    • institutional affiliation and contact details.

    Full chapters will be expected to be approximately 6,000–8,000 words, including references.

    Timeline

    • Proposal submission deadline: 30 June 2026
    • Notification of acceptance: 15 July 2026
    • Full chapter submission: 30 November 2026
    • Editorial feedback: January 2027
    • Revised chapter submission: 28 February 2027
    • Final manuscript preparation: March–April 2027

    Submission

    Chapter proposals should be sent to:

    Maria Pia Ester Cristaldi

    Üsküdar University

mariapia.cristaldi@uskudar.edu.tr

    Please include “Chapter Proposal – Artificial Intelligence and Cultural Meaning” in the subject line.

    Contact Information

Ester Cristaldi, Üsküdar University, mariapia.cristaldi@uskudar.edu.tr

    Contact Email: mariapia.cristaldi@uskudar.edu.tr

  • 30.04.2026 13:50 | Anonymous member (Administrator)

    May 13, 2026

This webinar aims to provide a much-needed focus on disability studies and media in Africa, sharing critical insights from the work of scholars and practitioners in the field while introducing and centering important voices from African scholarship on disability and media.

    Speakers will focus on topics such as “Between Tragedy and Miracles: Personal reflections on encountering blindness narratives in media and the development of authentic identity”, “Community Media as a Tool for Disability Empowerment in Rural Africa”, “The role of the media in shaping attitudes about albinism in Sierra Leone” as well as “Disability, Normalcy, Difference, and Eugenic Thinking”.

    Sponsored by the Inclusive Communication and People with Disabilities (ICO) Working Group, this webinar creates a space to share work, identify gaps and spotlight important contributions in African disability studies and media scholarship.

    When: Wednesday, 13 May 2026 @ 11:00 UTC / 12h00 London / 13h00 Paris / 14h00 Nairobi / 16h30 Kolkata / 19h00 Beijing / 21h00 Brisbane. The event will last 2 hours.

    Pre-registration is required by 11 May. Register at https://iamcr.org/webinars/register-media-and-disability 

    Sponsored by: IAMCR's Inclusive Communication and People with Disabilities (ICO) Working Group

    Organisers

    • Ngozi Marion Emmanuel, Research Assistant, Sir Lenny Henry Centre for Media Diversity, Birmingham City University, UK
    • Lorenzo Dalvit, Professor of Digital Media and Cultural Studies, School of Journalism and Media Studies, Rhodes University, South Africa
    • Bimbo Fafowora, Postdoctoral Research Fellow, School of Journalism and Media Studies, Rhodes University, South Africa

    Moderator

    Ngozi Marion Emmanuel, Sir Lenny Henry Centre for Media Diversity, Birmingham City University, UK

For more, see https://iamcr.org/webinars/media-and-disability


  • 30.04.2026 13:44 | Anonymous member (Administrator)

    A Call for Book Chapter Proposals

    Deadline: May 15, 2026

    We are pleased to share this call for book chapter proposals for Teaching AI, to be published open access by EdTech Books. Abstracts (250 words) are due May 15, 2026. Authors will be notified no later than May 29, 2026. Accepted chapters will be due by July 1, 2026. The book will be published in Fall 2026. Full details are below, but please feel free to contact us with questions or to submit your proposal at rferdig@gmail.com.

    We recognize this is an ambitious timeline. However, because each chapter follows a structured template and draws directly from courses you are already teaching, we believe the turnaround is manageable. Accepted authors will receive the full template upon notification and can expect chapters to run approximately 4,000-6,000 words.

    Best, Richard E. Ferdig (Kent State University), Richard Hartshorne (U. Central Florida), Enrico Gandolfi (KSU), Laurie O. Campbell (UCF), and Jennifer Petit (KSU)

    Purpose

    Artificial intelligence is not new. Faculty across computer science, cognitive science, information systems, engineering, and related fields have been teaching AI for decades, building courses, developing pedagogical approaches, and preparing students for a world increasingly shaped by intelligent systems. What has changed in recent years is not the existence of AI but its visibility, its accessibility, and its reach. AI is now part of nearly every discipline and nearly every conversation about the future of education, work, and society.

    And yet, despite this breadth, we do not always share what we know. Syllabi go unread beyond individual institutions. Pedagogical decisions made through years of trial and error stay locked in one classroom. Faculty building new AI courses, often under significant institutional pressure and with little time, are reinventing wheels that their colleagues across campus or across the world have already built.

    The goal of Teaching AI is to fix that. This edited collection brings together faculty who teach AI (in any discipline, at any level, in any context) to share their syllabi, their teaching strategies, their hard-won best practices, and their vision for where AI education is headed. The result will be a single, rich, open-access resource for anyone teaching AI or thinking about doing so.

    This is not a collection about AI in the abstract. It is a collection about the concrete, practical, and deeply human work of teaching AI to students. We are as interested in the instructor who has been teaching machine learning since the 1990s as we are in the instructor who launched an AI literacy course last semester. Both have something essential to contribute.

    Each chapter follows a shared template and includes multiple components: course purpose and objectives, disciplinary context, pedagogical approach, AI ethics and academic integrity, course texts and technologies, assignments, an expanded course outline, best practices, and future directions. Chapters will be organized by content area, and that organization will emerge from the submissions themselves.

    For a sense of what this format looks like in practice, we encourage prospective contributors to review our related collection, Teaching the Game (Volumes 1 and 2), available free of charge at Volume 1 and Volume 2. While that collection focuses on gaming education, the chapter format, voice, and scope are directly analogous to what we are building here.

    Areas We Especially Welcome

Any standalone AI course (whether discipline-specific or designed for a general audience) is eligible for consideration. We welcome submissions from institutions around the world and across every level of instruction within higher education.

    To ensure the collection reflects the full breadth of how AI is being taught, we are particularly interested in courses that represent the following areas. This is not an exhaustive list but rather an invitation. If your course does not appear here, that is not a reason to hesitate. It may be exactly what this collection needs.

    • AI literacy and general education. Courses designed for students across majors that build foundational understanding of what AI is, how it works, and what it means for society. We welcome both introductory survey courses and more advanced treatments of AI for non-specialists.
    • AI ethics, policy, and governance. Courses centered on the societal, legal, and ethical dimensions of AI (e.g., bias, accountability, transparency, regulation) and the responsibilities of those who build and deploy intelligent systems.
    • AI and human interaction. Courses exploring how humans and AI systems work together (e.g., human-centered AI design, explainability, trust, accessibility, and the user experience of intelligent systems).
    • Generative AI and creativity. Courses built around generative tools and their implications for art, music, writing, design, and other creative disciplines (including both technical and critical approaches).
    • AI for soft skills. Courses that address how AI and generative AI can be used to develop and strengthen competencies such as collaboration, teamwork, self-efficacy, and empathy across disciplines and professions.
    • AI and the workforce. Courses focused on how AI is transforming professional practice, career preparation, and workplace dynamics across industries.
    • AI and society. Courses that examine AI's broader cultural, political, and societal impacts, including surveillance, misinformation, democracy, and questions of power and equity.
    • Discipline-specific AI. Courses that examine what AI means within a particular field (e.g., health, law, education, business, journalism, the arts, and beyond). Teaching AI in nursing looks fundamentally different from teaching AI in computer science or communications, and those differences are exactly what this collection wants to capture.
    • Technical and applied AI. Courses focused on building, training, and deploying AI systems (e.g., machine learning, deep learning, natural language processing, computer vision, and related areas) with particular interest in how instructors make technical content accessible and pedagogically meaningful.
    • International and global perspectives. AI is developed, deployed, and experienced differently across cultures, regions, and political contexts. We actively encourage submissions from institutions outside North America and Western Europe, and from courses that engage critically with global dimensions of AI.

    One important note: we are specifically seeking standalone AI courses. These are courses in which AI is the primary subject. Courses that include an AI module or unit within a broader curriculum are outside the scope of this collection.

    Details

    To be considered, please submit a 250-word abstract by May 15, 2026, that includes:

    • Author name(s), institutional affiliation(s), and email address(es) 
    • Title of course 
    • Course keywords: content area, level (e.g., undergraduate, graduate, professional development), and delivery mode (e.g., online, face-to-face, hybrid) 
    • Brief description of the course, including its context, how long it has been taught, and any ways it has evolved in response to recent developments in AI

    Full chapters will be due July 1, 2026. Accepted authors will receive a complete updated chapter template. The book will be published open access with Creative Commons licensing by EdTech Books.

    Please send proposals and any questions to rferdig@gmail.com.

  • 30.04.2026 13:37 | Anonymous member (Administrator)

    Global Media and China

    Deadline: May 20, 2026

    We are pleased to announce a Call for Papers for a forthcoming special issue titled “AI, Algorithmic Media, and Digital Governance: Power, Control, and Technological Transformation,” to be published in the journal Global Media and China.

    The accelerating integration of artificial intelligence (AI) into digital infrastructures represents a profound transformation in contemporary media environments and governance systems. AI-driven platforms, algorithmic recommendation systems, and automated content moderation increasingly shape how information circulates, how public discourse is structured, and how political authority is exercised across different societies. These developments raise important questions about algorithmic governance, digital sovereignty, media regulation, and the broader political implications of AI-mediated communication.

    This special issue seeks to advance interdisciplinary scholarship examining the evolving relationships between AI technologies, media systems, and governance practices. We welcome contributions that critically explore how algorithmic systems influence media production, platform governance, public communication, and political power across diverse institutional and geopolitical contexts.

    We invite empirical, theoretical, and methodological contributions from scholars working in communication and media studies, political science, digital governance, sociology, science and technology studies, and related disciplines. Submissions may focus on China, or adopt comparative and transnational perspectives.

    Possible topics include, but are not limited to:

    • Algorithmic governance, digital statecraft, and political authority
    • AI-driven propaganda, information manipulation, and computational misinformation
    • State-led AI governance and digital surveillance regimes
    • Platform politics and the political economy of algorithmic systems
    • Public perceptions of AI and the politics of digital rights
    • AI infrastructures, technological sovereignty, and global asymmetries in digital power
    • Smart cities, Internet of Things systems, and algorithmic governance in public administration

    Key dates

    • Abstract submission deadline: 20 May 2026
    • Notification of invitations for full papers: 1 June 2026
    • Full paper submission deadline: 30 October 2026

    Please submit an abstract of up to 500 words to the guest editors with the subject line “GMAC Special Issue Submission.”

    Guest Editors:

    Dechun Zhang, University of Copenhagen (dezh@hum.ku.dk)

    Weiai Xu, University of Massachusetts Amherst (weiaixu@umass.edu)

    Han Lin, Soochow University (linhan741@gmail.com)

    Full details of the Call for Papers can be found here:

    https://journals.sagepub.com/pb-assets/cmscontent/GCH/Algorithmic%20Media_CFP-1773117974170.pdf

    We would greatly appreciate it if you could circulate this call among colleagues, research groups, and academic networks who may be interested.

    Thank you for your attention, and we look forward to receiving your submissions.

  • 30.04.2026 13:30 | Anonymous member (Administrator)

    June 4, 2026, 8:30–12:00 (UTC+2)

    Cape Town, South Africa (in person) 

    Deadline: May 4, 2026

We are delighted to announce that registration is officially open for the ICA 2026 pre-conference: https://www.icahdq.org/event/Childrens (deadline: 4 May 2026)

    Why attend?

    This half-day, workshop-style pre-conference will bring together scholars and practitioners to explore how research can inform policy, regulation and child rights-respecting design in digital environments.

    Keynote Speaker – Professor Ann Skelton

    University of Pretoria & University of Leiden, Former Chair, UN Committee on the Rights of the Child.

    What to expect:

    The programme encompasses an exceptional breadth of scholarship relating to children’s rights, ranging from AI governance, platform power and digital labour to youth activism, digital violence, age-based bans, family mediation, gaming ecosystems and data protection. The conference discussions will be grounded in rich empirical work from across Africa, Latin America, Asia, the Middle East, Europe and Australia.

    Registration Details

    Fee: $35

Fee waivers are available for students and participants from UN third-tier countries. If this applies to you, please email us to obtain a waiver: info@dfc-centre.net

    ICA membership or main conference registration not required

    About

    We invite scholars, practitioners, policymakers, and civil society actors to join us for the preconference to discuss how research can guide policy, regulation, and digital design, and how Global South perspectives can strengthen and reshape international debates within the framework of the UN Convention on the Rights of the Child and General comment No. 25.

    The pre-conference is organised by Digital Futures for Children, a joint research centre at LSE with 5Rights Foundation, in association with the ICA divisions Children, Adolescence and Media and Communication Law and Policy. For further information, visit https://www.digital-futures-for-children.net/events/ica/preconference 

  • 30.04.2026 13:16 | Anonymous member (Administrator)

Alongside the academic program, participants are invited to take part in a carefully curated program of guided tours and cultural events, designed to offer a deeper and more distinctive engagement with Brno. Extending beyond standard sightseeing, the program provides access to experiences that are rarely available to visitors: the tours cover key highlights of the city, but they also offer a unique look at some of Brno's architectural marvels, including the functionalist Villa Tugendhat and the city water tanks. Cultural events include an English-friendly theatre performance, film screenings in a functionalist church, and a workshop on sustainable analog photography. These events offer not only cultural insight but also access to spaces and experiences that are often closed even to local residents.

    Participants can sign up for these activities during the conference registration process. As capacity is limited and many of these events tend to fill up quickly, we strongly encourage you to secure your spot early, before 31 May 2026.

    More information about the program is available here: https://ecrea2026brno.eu/tours-culture-workshops/


contact

ECREA

Antoine de Saint-Exupéry 14
6041 Charleroi
Belgium


Support Young Scholars Fund

Help fund travel grants for young scholars who participate in ECC conferences. We accept individual and institutional donations.


