The Subspace GMM acoustic model has both globally shared parameters and parameters specific to acoustic states, which makes possible various kinds of parameter tying. In previous work we investigated sharing the global parameters among systems with distinct acoustic states; this can be useful in a multilingual setting. In the current paper we investigate the reverse idea: using different global parameters for different acoustic conditions (here, gender) while sharing the acoustic-state-specific parameters. We model gender dependency in this way and show Word Error Rate improvements on a range of tasks, with results comparable to those of the Exponential Transform (ET), a technique similar to Vocal Tract Length Normalization (VTLN).