Seale, Draffan, & Wald, 2010

Digital agility and digital decision-making: conceptualising digital inclusion in the context of disabled learners in higher education

Seale, J., Draffan, E. A., & Wald, M. (2010). Digital agility and digital decision-making: conceptualising digital inclusion in the context of disabled learners in higher education. Studies in Higher Education, 35(4), 445-461.

The purpose of this study was to explore digital agility and digital decision-making among students with disabilities in the context of a specific project, LEXDIS. Participants were 31 higher education students at an institution in the United Kingdom, 17 female and 14 male, all younger than 20 years old. The study used a participatory framework in which individuals with disabilities were involved as consultants or designers (or chose to help disseminate the final work).

The research design consisted of three phases: an online survey, interviews, and a focus group. The intervention was learning about, using, and participating in the design of LEXDIS. Outcomes were coded and mapped against a framework of digital inclusion (resources and digital decisions), categorized as technological, personal, and contextual (social). The framework was designed to capture digital inclusion beyond accessibility and knowing how to do things with digital tools; in other words, it was designed to capture the complex, multi-layered nature of digital inclusion.

Technological = physical and material resources
Personal = human or mental resources
Contextual = temporal, social, or cultural resources

The survey revealed that all students customised their digital devices (icons, colors, etc.). Most students owned a phone and a laptop. Most used instant messaging, discussion forums, and social networking sites, and uploaded videos or photographs to the Internet. All students used Google or another search engine to access information and had used online learning materials. They also used word processing programs (e.g., Google Docs), spreadsheets, and email.

Students described many strategies for using digital tools and expressed a high level of confidence in their usage.

Factors that influenced students' use of technologies included technological factors (affordances) and personal factors (e.g., feeling stigmatized when using assistive technologies in public). Some students reported that they did not use social networking because it takes them "twice as long as everyone else to do it," which speaks to perceived value.

The study identified the digital agility of students. The researchers encourage educators to avoid seeing students with disabilities as victims of exclusion, supporting an empowerment model instead.




Smith, Schmidt, Edelen-Smith, & Cook, 2013

Pasteur’s Quadrant as the Bridge Linking Rigor with Relevance

Pasteur’s quadrant refers to a section of the Quadrant Model of Scientific Research introduced by Donald E. Stokes (1997). The model features the work and work habits of three scientist-inventors: Bohr, Pasteur, and Edison.

Just to give you some background information, we are all aware of a tension between teachers and researchers. Even if you do not notice it in your own work, you might see it in other classrooms or with other researchers.

In this paper, the researchers explore the tension between teachers and researchers through the lens of discourse theory. Discourse theory is studied widely in communication because it recognizes that “language alone cannot account for meaning” in communication. It takes into account the discourse community: a group of folks who use similar language to communicate. The researchers describe a discourse community as an “anointed guardian of the truth,” and thus the language its members use to describe and discuss knowledge and knowledge production (words, emphases, syntax, etc.) defines them as part of that discourse community. We use discourse communities to understand how people belong together. We are trying to understand whom to include; however, when we include many people, we also exclude others.

Among the discourse communities of teachers and researchers, there is a sense that teacher knowledge/wisdom is a separate and lesser category of knowledge. This can create indignation toward researchers for excluding teachers from their discourse community. It may even give rise to a “resistance culture” wherein knowledge production is limited or even halted by the actions of members of both discourse communities. In these situations, it is important to recognize that it’s not that teachers don’t like research or evidence-based practices (EBPs); they simply value a different approach. Teachers tend to value practice-based evidence (PBE), which is relevant and externally valid, while researchers value EBPs, which are rigorous and internally valid. Because discourse communities are closely linked to identity and trust, it is unlikely that logical appeals for everyone to “just get along” will be enough to integrate EBP and PBE.

Smith et al. say we don’t have to choose. They present the Stokes (1997) model, which frames research as synergistic, using a both/and approach rather than an either/or approach. In the model, Bohr (who developed the modern model of the atom) represents the quest for knowledge without consideration of use. Edison (who invented many things) represents the view that deeper scientific knowledge is secondary to the development and application of useful products. Pasteur brings these together, combining scientific efficacy with real-world effectiveness. In the model, practice and research are complementary. This is a use-inspired basic research model.

So, how is this done? How can we implement this model? The researchers suggest we use Education Design Research (EDR) and implement it through Communities of Practice. This is not suggested as an actual methodology by the authors, but as a “framework for a range of methodologies.”


External validity:  the validity of generalized (causal) inferences in scientific research. In other words, it is the extent to which the results of a study can be generalized to other situations and other people.

Internal validity:  how well an experiment is done, especially whether it avoids confounding (more than one possible independent variable [cause] acting at the same time). The less chance for confounding in a study, the higher its internal validity.

Education Design Research:  developed by Brown (1992) and Collins (1992). Suggests that “rigorous education research should take place in complex educational environments.”

Communities of Practice:  groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly (Wenger-Traynor, n.d.).

EDR & Communities of Practice

Characteristics of EDR/CoP

  • Iterative
  • Does not ignore contextual variables, which are complex and unpredictable (traditional research seeks to control these)
  • Flexible
  • Interaction occurs in a Community of Practice
  • Synergy through sharing
    • values
    • goals
    • practices
  • Common purpose for teachers and researchers
  • Internal motivation to solve problems
  • Participatory relationships

Stages of EDR

  • analysis/exploration
  • design/construction
  • evaluation/reflection




Klingner, Boardman, & McMaster, 2013

What Does it Take to Scale Up and Sustain Evidence-based Practices?

Implement, validate, and scale up
Starting large research projects can be cumbersome. A more agile approach might be to become more sensitive and responsive to contextual factors.

Scaling up is a process by which interventions are implemented small-scale, validated, and then implemented on a larger scale. IES defined a need for understanding the organizational conditions needed to support an intervention and determine the effects of selected moderators of the intervention. Even schools that implement successfully struggle with sustainability due to competing priorities, changing demands, and teacher/staff turnover.

Dunlap (2009) + Coburn (2003) = emergence, demonstration of capacity, elaboration, system adoption and sustainability.

Persistence after funding ends is low, even when the results are good. Cultural changes are required, and local contextual features of an educational system must be acknowledged and managed. Other struggles occurred when scaling up was the domain of education policymakers rather than researchers/teachers/administrators (but of course, right?). Funding agencies have also been challenging because they did not account for local complexities. IES funding structures typically wanted a “standard” implementation; they did not want to implement in an ad-hoc manner. From my experience in software development and implementation, I’d completely agree with that approach: ad-hoc implementations lose the power of economies of scale and become troublesome/expensive to maintain. In software, we required business units to adapt to us, while education is a completely different deal.

Implementation science may be the missing link between standardized practices and successful implementations. Implementation science addresses adoption decisions, capacity building, training, technical assistance, consumer participation and satisfaction.

*Under what conditions and with whom does an EBP work?
*Why is it necessary to support teacher implementation of an EBP?
*What is necessary to increase the capacity of districts under different ecological and population conditions?
*What is necessary to support deep, broad, sustainable implementation of the EBP?

*Classwide Peer Tutoring (CWPT)
*Peer Assisted Learning Strategies (PALS)
*School Wide Positive Behavior Supports (SWPBS)

Success factors
*Maximizing contextual fit between the EBP and the educational environment
*Promoting the EBP as a priority
*Ensuring fidelity of implementation
*Increasing efficiency by integrating the EBP into daily school operations
*Using data to make ongoing decisions about the EBP


It is important to strike a balance between implementation fidelity and teacher flexibility. Factors that can support this balance: PD and district leadership.

When we fail to scale-up interventions, the result is a gap between research and practice.
*Lack of trust
*Not valuing the input of all stakeholders equally
*Different beliefs and philosophies
*Tendency to dismiss evidence that does not support our pre-existing views
*Sometimes a practice’s effectiveness or generalizability is overstated

PD on scale-up is highly recommended.



Cook & Cook, 2013

Evidence-Based Practices and Implementation Science in Special Education

The value of EBPs is limited by the quality, reach, and sustainability of implementation practices. The gap between research and practice in special education is persistent and confounding to all who care about children’s futures. Some combination of EBPs and implementation science may help bridge the gap at some point. The challenge to discover the “secret sauce” is ongoing.



Barton & Smith, 2015

Advancing High-Quality Preschool: A Discussion and Recommendations for the Field

The crux of this article (in the context of 8304) is its recommendation to use an Implementation Science Framework to increase inclusion of preschool students with disabilities in classrooms with typically-developing children. Implementation Science explores how a particular evidence-based policy can be successfully implemented in an educational system. It suggests there are particular leadership and organization supports that increase the chances of facilitating lasting change in educational systems. For example, the following practices were recommended: creating work groups to focus on identifying local policy barriers to inclusion, appointing community leaders to address attitude and belief challenges in the local population, and enlisting state directors of special education in establishing short- and long-term goals related to inclusion.

The implementation science angle reminds me of the work I did in a course on UDL. The final project in the course was developing a system-wide plan for change and support of implementation of UDL principles. It seemed widely recognized that one educator, one principal, even one school board member could not make the change to using UDL as an educational foundation on their own. Instead, the entire system must be revamped to reflect the iterative and change-oriented atmosphere needed to support implementation. In addition, the system of implementation recommended in the course was to use UDL to implement UDL. How meta, yes? This was an interesting feature, which perhaps does not apply to all implementations of change. Even so, the most important part of the final project was engaging all parts of a system in the change (using UDL to implement UDL). Most importantly, continuous PD was emphasized as crucial to sustaining interest and enthusiasm. I wonder if this method worked because of the “using UDL” part or the “systemic engagement” part (as an instantiation of a successful Implementation Science approach).



Fixsen, Blase, Metz, & Van Dyke, 2013

Statewide Implementation of Evidence-based Practices

This article presents a reality that all researchers, practitioners, and policy-makers must confront: evidence-based practices are only as successful (from an outcome perspective) as their implementation and acceptance into the micro-culture of a place of learning and the macro-culture of an educational system. The researchers define three categories of implementation of educational evidence-based practice: letting it happen, helping it happen, and making it happen. An example of letting it happen is publishing enough studies with enough positive results to define an evidence-based practice, and then hoping it catches on in an educational environment somewhere. An example of helping it happen is creating/hosting PD for educators and/or administrators without follow-up or thought to the complex contexts of educating students with disabilities. The last category, making it happen, involves implementation science and successful scaling up.

The researchers present an implementation framework that relies on an implementation team linking practitioners, innovations, and students. Management teams at the state level are encouraged to prevent the establishment of “islands of excellence.” Difficulties in implementation are discussed: facilitating change from inside a system, new practices being drowned out by established practices, and wicked problems. It was also noted that the term “program” has not been well defined, so it is difficult to delimit implementation efforts.

In order to scale up evidence-based interventions, the authors recommend first scaling up implementation capacity. A general investment in implementation capacity was emphasized for improved outcomes in evidence-based educational research.


Goldstein, 2014

This article describes an evaluation approach that can be used with both group and single-case research designs (SSED). The approach evaluates types of treatments, research designs, quality of studies, and effect sizes. The presentation follows the graphical style Consumer Reports magazine uses to evaluate products (no, really, and it’s cool!).

Research studies were evaluated according to the following criteria:

  • Design characteristics and general validity (and internal validity for group methods)
  • Measurement and reliability (assessment methods, implementation fidelity)
  • General characteristics and results (rationale, treatment effects, statistics)
  • Dimensions of external validity (such as social validity)

Advantages of this technique:  patterns can be detected, the presentation of the data is clear, and the approach is accessible to many researchers (not just quant folks).

Disadvantages of this technique:  inter-rater issues (complexity and training), and the categories are quite general.
