For anyone who might be interested, I created a simple version of a TLA+ REPL using Python and the TLC model checker, which provides an interactive way to evaluate TLA+ expressions. Right now it is a bit slow, since it starts up a new TLC instance to evaluate each expression, but even with a lag of ~1 second per evaluation, the feedback loop for testing out TLA+ expressions is much faster than what I can manage with the existing Toolbox features. Also, it does not support "persistence" yet: every expression is evaluated in complete isolation, so you cannot define something and use it later in a REPL session.
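For context, the one-TLC-instance-per-expression approach can be sketched roughly as follows. This is my own illustration, not the actual implementation: the module template, the `PrintT` trick, and the `tla2tools.jar` path are all assumptions about how such a wrapper is commonly built. TLC evaluates `ASSUME` statements up front, so printing the expression's value from an `ASSUME` gets a result even though TLC then complains about the missing behavior spec.

```python
import os
import subprocess
import tempfile


def make_module(expr, module_name="expr_eval"):
    """Wrap a single TLA+ expression in a throwaway module.

    TLC has no standalone expression-evaluation mode, so one common
    trick (assumed here) is to print the expression's value from an
    ASSUME via the standard PrintT operator from the TLC module.
    """
    return "\n".join([
        f"---- MODULE {module_name} ----",
        "EXTENDS Naturals, Sequences, FiniteSets, TLC",
        f"ASSUME PrintT({expr})",
        "====",
    ])


def evaluate(expr, tla2tools_jar="tla2tools.jar"):
    """Evaluate one expression by spawning a fresh TLC instance.

    The jar path is a placeholder; point it at your local copy of the
    TLA+ tools. Each call pays the full JVM + TLC startup cost, which
    is where the ~1 second lag per expression comes from.
    """
    with tempfile.TemporaryDirectory() as tmp:
        tla_path = os.path.join(tmp, "expr_eval.tla")
        with open(tla_path, "w") as f:
            f.write(make_module(expr))
        # TLC prints the PrintT output before reporting that the
        # module has no behavior spec, so stdout carries the value.
        result = subprocess.run(
            ["java", "-cp", tla2tools_jar, "tlc2.TLC", tla_path],
            capture_output=True, text=True, cwd=tmp,
        )
        return result.stdout
```

Because the temporary module is rebuilt from scratch on every call, nothing defined in one evaluation is visible to the next, which is exactly the "no persistence" limitation described above.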
Even with these caveats, I have found it to be quite a useful little tool for experimentation and debugging. Often I find myself forgetting a subtle nuance of how to express something, and it is nice to have a quick way to test things out. If anyone has feedback or suggestions, I would welcome them. I hope that this tool might be of use to others who are learning TLA+ or writing larger specifications and want something to aid their development workflow.
You can clone the REPL from the GitHub link below. It only requires Python to run, and it includes an automated setup script that will download the TLA+ tools for you if you don't already have them installed.