We propose a framework for quantifying the trustworthiness of natural language explanations, balancing plausibility and faithfulness to derive a Language Explanation Trustworthiness Score (LExT). We apply the framework in healthcare settings, comparing general-purpose and domain-specific models.
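As a rough illustration of what "balancing" two component scores might look like, the sketch below combines a plausibility score and a faithfulness score via their harmonic mean, which penalizes explanations that score well on one axis but poorly on the other. This is a hypothetical combination rule for illustration only; the actual LExT definition is not given in this abstract, and the function name `lext_score` and the harmonic-mean choice are assumptions.

```python
def lext_score(plausibility: float, faithfulness: float) -> float:
    """Hypothetical trustworthiness score combining two components in [0, 1].

    Uses the harmonic mean, so a low value on either axis drags the
    overall score down. NOTE: this is an illustrative assumption, not
    the published LExT formula.
    """
    if plausibility <= 0.0 or faithfulness <= 0.0:
        return 0.0
    return 2 * plausibility * faithfulness / (plausibility + faithfulness)
```

For example, an explanation that is highly plausible (0.9) but weakly faithful (0.3) scores 0.45 under this rule, well below the arithmetic mean of 0.6, reflecting the intuition that trustworthiness requires both properties.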