Now that your study is complete, how did the actual implementation validate or challenge your original assumptions?
Our initial assumption—that students were aware of energy policies through AFYOP—was only partially validated. While awareness was moderate to high, actual behavioral adherence was lower, showing a gap between knowledge and practice. This confirmed that awareness alone isn't enough—nudges must also target ability and motivation, as the FBM predicts.
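For context, the Fogg Behavior Model condenses this into B = MAP: a behavior (B) occurs only when sufficient motivation (M), ability (A), and a prompt (P) converge at the same moment, so a campaign that raises motivation without touching ability or prompts can still fail to change behavior.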
What unexpected challenges did you encounter when translating your theoretical framework into actual data collection?
One challenge was operationalizing abstract FBM variables into measurable survey indicators. Students also tended to overreport "green" behaviors due to social desirability bias. We mitigated this by cross-verifying awareness and adherence responses for consistency.
Why did you decide to keep the focus on AdDU instead of expanding to other universities after your proposal defense?
We wanted to maintain internal validity and control for institutional culture, since AdDU's sustainability policies are already structured and integrated into student life. Expanding outward risked introducing policy heterogeneity. Focusing on one campus allowed a more precise behavioral analysis.
How did your findings strengthen AdDU's sustainability mission or provide feedback to institutional policy?
Our results identified which nudges—particularly priming cues—were most behaviorally effective, allowing policy refinement without additional financial burden. The findings provide evidence-based feedback to Ecoteneo and the PPO for improving campaign placements and visibility. It reinforces that behavioral design matters as much as policy content.
Looking back, do you think nudging is enough for behavior change in a university setting?
Nudges are a strong starting point but not sufficient alone. Long-term change requires institutional reinforcement, consistent prompts, and social modeling. Nudges work best when embedded within a broader culture of environmental accountability.
How did you ensure the reliability and validity of your survey instrument?
We adapted indicators from validated FBM-based instruments and had them reviewed by behavioral and economics faculty. A pilot test helped refine ambiguous questions, and Cronbach's alpha confirmed internal consistency. This ensured both construct validity and statistical reliability.
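A minimal sketch of that internal-consistency check in Python, assuming the pilot responses sit in a pandas DataFrame with one column per Likert item (the column names and values below are illustrative, not the study's actual instrument):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each respondent's summed score
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / total_var)

# Illustrative pilot responses: rows = respondents, columns = Likert items
pilot = pd.DataFrame({
    "m1": [4, 5, 3, 4, 2], "m2": [4, 5, 2, 4, 3],
    "a1": [3, 5, 3, 4, 2], "p1": [4, 4, 3, 5, 2],
})
print(round(cronbach_alpha(pilot), 2))  # values >= 0.70 are conventionally acceptable
```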
Why did you stick with a cross-sectional design instead of longitudinal tracking?
Given academic time constraints, longitudinal tracking was impractical. However, our model's inferential structure allowed predictive interpretation of behavioral patterns at one point in time. Future research could build on this with repeated measures to observe behavior over time.
Can you explain how the logit model captured behavioral change?
The logit model estimated the probability that a student exhibits energy-saving behavior (coded 1) given their motivation, ability, and prompt scores. Coefficients showed the strength of each variable's influence. The model's predictive accuracy demonstrated that institutional prompts significantly increase behavioral likelihood.
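A sketch of how such a logit specification could be fit with statsmodels, on synthetic stand-in data (the variable names and coefficients below are illustrative, not the study's estimates):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "motivation": rng.normal(3.5, 0.8, n),  # composite FBM scores (synthetic)
    "ability":    rng.normal(3.8, 0.7, n),
    "prompt":     rng.normal(3.2, 0.9, n),
})
# Behavior generated so that prompts carry the largest weight, mirroring the finding
xb = -6 + 0.4 * df["motivation"] + 0.5 * df["ability"] + 0.9 * df["prompt"]
df["behavior"] = rng.binomial(1, 1 / (1 + np.exp(-xb)))

model = smf.logit("behavior ~ motivation + ability + prompt", data=df).fit()
print(model.summary())       # coefficients, z-statistics, p-values
print(model.prsquared)       # McFadden's pseudo-R^2
print(np.exp(model.params))  # odds ratios: multiplicative effect on the odds per unit
```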
How did you control for possible biases in self-reported behavior?
We included awareness and perception variables as control factors and used consistency checks between sections. Patterns of contradiction (e.g., high awareness but zero adherence) helped flag overreporting. We also anonymized responses to reduce social desirability bias.
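One way such a contradiction check could be scripted, with illustrative composites on a 5-point scale (the thresholds here are hypothetical, not the study's actual flagging rule):

```python
import pandas as pd

# Illustrative 5-point composites (real values came from the survey sections)
responses = pd.DataFrame({
    "awareness": [4.5, 2.0, 4.8, 3.0, 4.2],
    "adherence": [1.0, 2.5, 4.0, 3.0, 1.2],
})

# High stated awareness (>= 4) paired with near-zero adherence (<= 1.5)
# is the contradiction pattern used to flag possible overreporting
flagged = responses[(responses["awareness"] >= 4) & (responses["adherence"] <= 1.5)]
print(f"{len(flagged)} of {len(responses)} responses flagged for review")
```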
Why was stratified sampling still appropriate post-proposal?
It allowed proportional representation across AdDU clusters, preventing academic program bias. Maintaining the same sampling framework ensured continuity and comparability from the approved proposal to final execution. This preserved methodological rigor and balanced respondent diversity.
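A sketch of proportional stratified selection in pandas, assuming a sampling frame with a cluster column (cluster labels and the sampling fraction are illustrative):

```python
import pandas as pd

# Toy sampling frame; cluster labels stand in for the AdDU academic clusters
frame = pd.DataFrame({
    "student_id": range(1, 11),
    "cluster": ["SEA"] * 3 + ["SAS"] * 4 + ["SBG"] * 3,
})

# Drawing the same fraction from every cluster keeps each stratum represented
# in proportion to its share of the population
sample = frame.groupby("cluster", group_keys=False).sample(frac=0.3, random_state=1)
print(sample)
```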
Which factor among motivation, ability, and prompt had the strongest effect?
Prompts had the highest predictive power, particularly environmental cues and visible reminders. This validates the Fogg Model's principle that even with moderate motivation, timely prompts can trigger behavior. Essentially, the environment "nudged" more effectively than self-reported values did.
Were there differences across schools or year levels?
Yes. Students from SEA and SAS showed higher adherence, possibly due to more exposure to sustainability content in coursework. Senior students also demonstrated slightly higher behavioral consistency, likely due to longer institutional immersion.
How did the results compare between social norm and priming nudges?
Priming nudges outperformed social norm nudges in effect size and perceived influence. Visual cues like posters and reminders were more consistently internalized than peer-based behavioral comparisons. This suggests that physical environment design has more staying power than social influence alone.
Did your regression model encounter any major statistical issues?
No significant multicollinearity or heteroskedasticity was detected. Model fit was within acceptable pseudo-R² bounds, and all key predictors were significant at the 0.05 level. This strengthens confidence in our results' robustness.
What behavioral trend surprised you the most?
Students rated environmental campaigns as "important" yet rarely followed through with daily actions. This mismatch underscored how awareness campaigns need direct prompts to translate intention into practice. It reinforced that motivation without simplicity or reminders leads to behavioral inertia.
How do your results inform actual policy improvement at AdDU?
They provide clear evidence that behaviorally informed cues—like sign placements and targeted campaigns—produce measurable behavioral outcomes. AdDU can integrate these findings by redesigning communication strategies and default settings in facilities. It's a policy-level justification for using behavioral insights in sustainability management.
If you were to redo the study, what would you change?
We'd add a longitudinal component to track whether behavior persists over time and incorporate experimental manipulations of nudge exposure. This would strengthen causal inference and minimize self-report bias. It would also allow us to capture adaptation and "nudge fatigue."
What broader implications do your results have beyond AdDU?
The findings can guide other universities or local government offices aiming to implement low-cost, behavior-based energy policies. They show that cultural framing and behavioral design can significantly enhance sustainability compliance. This aligns with SDG 7 and behavioral governance strategies.
What was the strongest validation of your hypothesis?
Our regression confirmed that nudges significantly influence the likelihood of energy-saving behavior, rejecting the null hypothesis. Both social norm and priming interventions had distinct but positive effects. It quantitatively verified that behavioral design can drive measurable environmental impact.
What's your main takeaway as researchers after completing this study?
Behavioral change isn't just about awareness—it's about design. Even small adjustments in context or messaging can shift collective habits. For us, it proved that sustainable development is as much about psychology and behavior as it is about economics and technology.
How did you ensure that your variables did not suffer from multicollinearity?
We ran a Variance Inflation Factor (VIF) test, and all values were below the threshold of 5. This confirmed that motivation, ability, and prompt were distinct constructs without strong interdependence. Hence, the regression coefficients remained interpretable and unbiased.
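A sketch of the VIF check with statsmodels, on synthetic predictors (a constant is added so the VIFs are computed against an intercept-bearing design matrix):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "motivation": rng.normal(size=200),
    "ability":    rng.normal(size=200),
    "prompt":     rng.normal(size=200),
})
X = sm.add_constant(X)  # VIFs should be computed with the intercept in the design

for i, col in enumerate(X.columns):
    if col != "const":
        # VIF < 5 is the rule of thumb the study used for "no problematic collinearity"
        print(col, round(variance_inflation_factor(X.values, i), 2))
```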
Did your results support the concept of “nudge fatigue”?
Slightly. Some respondents reported that repetitive visuals became less noticeable over time. This suggests that nudge effectiveness may plateau unless the cues are periodically refreshed or combined with motivational reinforcement.
What behavioral trend stood out the most in your findings?
Students tended to overrate their own adherence to sustainable practices. This self-enhancement bias, common in environmental behavior studies, revealed a perception gap between what students believe they do and what they actually practice. It highlights the need for more objective behavioral monitoring.
How did motivation specifically influence energy-saving actions?
Motivation alone was not a strong standalone predictor, but it significantly interacted with prompts. Highly motivated students responded more consistently to visible cues. This shows that motivation amplifies, rather than replaces, the effects of environmental design.
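Such an interaction can be tested by adding a product term to the logit formula; a sketch reusing the synthetic df and column names from the logit example above:

```python
import statsmodels.formula.api as smf

# "motivation:prompt" adds the product term; a positive, significant coefficient
# would mean prompts work better for more motivated students (an amplifying effect)
interaction = smf.logit(
    "behavior ~ motivation + ability + prompt + motivation:prompt", data=df
).fit()
print(interaction.summary())
```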
Did income or expenditure levels correlate with adherence?
Not significantly. Energy-saving behaviors like turning off lights or avoiding unnecessary elevator use require little to no cost, so they are not strongly tied to income. This implies that behavioral change in this context is more psychological than economic.
How did qualitative responses support your quantitative findings?
The qualitative data from open-ended survey items echoed our regression results—students frequently cited posters, reminders, and campaigns as the main influences on their actions. Comments also revealed that peer imitation and habit formation reinforced the nudges’ effects. This triangulation strengthens the internal validity of our findings.
How do you interpret the difference between policy-based and behavior-based efficacy ratings?
Behavior-based interventions (like visual cues) scored higher than policy memorandums alone. It suggests that passive policy communication is less impactful without contextual reinforcement. In short, visible action reminders outperform administrative directives in driving real behavior.
What did the results of the t-test show regarding policies versus behavioral nudges?
The t-test indicated a significant difference (p < 0.05), showing that behavioral nudges received higher mean efficacy scores than policy measures. This suggests that subtle, behavior-based interventions resonate more with students than top-down institutional rules.
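A sketch of that comparison with SciPy, on illustrative ratings; an independent-samples version is shown, though a paired test would apply if the same respondents rated both intervention types:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative 5-point efficacy ratings, not the study's data
policy_scores = rng.normal(3.2, 0.8, 150).clip(1, 5)
nudge_scores  = rng.normal(3.7, 0.8, 150).clip(1, 5)

t_stat, p_value = stats.ttest_ind(policy_scores, nudge_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```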
What test did you use to compare the two types of nudges?
We used a one-way ANOVA to test for significant mean differences between social-norm and priming nudges. Both variables were continuous and derived from separate clusters of survey items, so ANOVA was well suited to comparing their perceived efficacy.
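A sketch of the comparison with SciPy, again on illustrative scores (with exactly two groups, the one-way ANOVA F statistic equals the square of the corresponding t statistic, so the two tests agree):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative perceived-efficacy composites for the two nudge types
social_norm = rng.normal(3.4, 0.7, 150)
priming     = rng.normal(3.8, 0.7, 150)

f_stat, p_value = stats.f_oneway(social_norm, priming)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```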