Briefly, the SentinelClient supplied in sentinel.go represents a connection to a bundle of sentinels. Only one connection is active at a time, but the SentinelClient will fail over its internal connection through all of the configured sentinels before failing any individual operation.
The SentinelClient has a Dial method, which connects to a sentinel, and DialMaster and DialSlave methods, which connect to the named master or to a slave of the named master, respectively.
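To make the intended call pattern concrete, here is a rough sketch of standalone use. The constructor name and arguments (NewSentinelClient with an address list) and the return type of DialMaster are my assumptions for illustration, not necessarily the exact API in the patch:

```go
// Illustrative sketch only: NewSentinelClient is an assumed constructor
// name and signature; DialMaster is the method described above.
package main

import (
	"log"

	"github.com/garyburd/redigo/redis"
)

func main() {
	// Assumed constructor: the network type plus the addresses of every
	// sentinel in the bundle.
	sc, err := redis.NewSentinelClient("tcp", []string{
		"127.0.0.1:26379",
		"127.0.0.1:26380",
		"127.0.0.1:26381",
	})
	if err != nil {
		log.Fatal(err)
	}

	// DialMaster resolves the current master of the named monitor through
	// the sentinels and returns an ordinary connection to it. If the
	// currently connected sentinel is unreachable, the client fails over
	// to the next configured sentinel before reporting an error.
	conn, err := sc.DialMaster("mymaster")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if _, err := conn.Do("SET", "greeting", "hello"); err != nil {
		log.Fatal(err)
	}
}
```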
The SentinelAwarePool supplied in pool.go is deliberately simple; I wanted to avoid an overly complex implementation here because I don't have much operational experience with the Pool. It differs from the standard Pool only in the addition of a TestOnReturn entry point, which tests returned connections for role changes, and a method to update the internal accounting of the master's address (the role detection and test wrappers are supplied in sentinel.go). The one meaningful operational difference is that the SentinelAwarePool can purge all idle connections when the master's configuration changes; active connections are handled by TestOnReturn.
An example of SentinelAwarePool usage is supplied in pool.go.
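For reference, a sketch of roughly what that usage looks like follows. The field layout (an embedded redis.Pool), the TestOnReturn signature, and the inline ROLE check are assumptions on my part and may differ from the actual code in pool.go and sentinel.go:

```go
// Illustrative sketch only: the exact SentinelAwarePool fields and the
// TestOnReturn signature are assumed here.
package main

import (
	"fmt"
	"time"

	"github.com/garyburd/redigo/redis"
)

func newSentinelPool(sc *redis.SentinelClient) *redis.SentinelAwarePool {
	return &redis.SentinelAwarePool{
		// Assumption: the SentinelAwarePool embeds the ordinary Pool and
		// reuses its configuration fields.
		Pool: redis.Pool{
			MaxIdle:     3,
			IdleTimeout: 240 * time.Second,
			// Always dial the master that the sentinels currently report.
			Dial: func() (redis.Conn, error) {
				return sc.DialMaster("mymaster")
			},
			TestOnBorrow: func(c redis.Conn, t time.Time) error {
				_, err := c.Do("PING")
				return err
			},
		},
		// Assumed hook: when a connection comes back to the pool, verify
		// it still points at a master; a role change after a failover
		// causes the connection to be discarded rather than reused.
		// (ROLE needs Redis 2.8.12+; the wrappers in sentinel.go
		// presumably cover older servers another way.)
		TestOnReturn: func(c redis.Conn) error {
			vals, err := redis.Values(c.Do("ROLE"))
			if err != nil {
				return err
			}
			role, err := redis.String(vals[0], nil)
			if err != nil {
				return err
			}
			if role != "master" {
				return fmt.Errorf("connection role changed to %q", role)
			}
			return nil
		},
	}
}
```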
I believe this supplies sufficient capability for a barebones but fully capable sentinel configuration, and enough tools and flexibility for a user to build a more complex setup (including one driven by Sentinel pub/sub, which is not included here).
I've tested this (both pool and standalone connections) on a new 2.8 cluster and on an older 2.6-era cluster that does not support CLIENT KILL. I would appreciate additional testing and validation if possible, especially on the new 3.0 release, which I do not have access to.