When using history with a list of stocks and '1m', dropna drops the time-series entry for every stock for every minute in which any single stock had a missing price (nan, meaning not-a-number). It is the scorched-earth approach, and it may explain the unexpected results you're seeing: no stock would be any more populated than the worst, most thinly traded one. If a stock in the list didn't trade for 40 minutes, those 40 minutes would suddenly disappear for all stocks in the list.
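A small sketch of that effect, using a hypothetical minute-bar frame (stand-in names, not real data) since data.history only runs on Quantopian:

```python
import numpy as np
import pandas as pd

# Hypothetical minute bars for three stocks; STOCK_C missed two minutes.
idx = pd.date_range('2016-01-04 09:31', periods=5, freq='min')
prices = pd.DataFrame({
    'STOCK_A': [10.0, 10.1, 10.2, 10.1, 10.3],
    'STOCK_B': [50.0, 50.2, 50.1, 50.3, 50.4],
    'STOCK_C': [5.0, np.nan, np.nan, 5.1, 5.2],
}, index=idx)

# dropna() removes any row containing a NaN, so the two minutes where
# STOCK_C didn't trade vanish for STOCK_A and STOCK_B as well.
print(len(prices.dropna()))  # 3 of the 5 minutes survive for everyone
```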
For that history call, try forward and back fill instead: .ffill().bfill(). Forward fill says, essentially: if a minute has nan (no trades), fill it with the previous minute's price, assuming the price didn't change; it operates on each stock individually. Back fill says: if the window for a security starts out with nan's, fill them with the first known price. Forward fill must be done first, otherwise prices from the future would be placed in the past (look-ahead bias). bfill is used at all only as an act of desperation for the start of look-back windows, to avoid even worse results for thinly traded stocks.
prices_1m = data.history(context.stocks, 'close', context.rsi_lookback*context.timeframe, '1m').ffill().bfill()
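A quick sketch of why the order matters, on a toy Series (the same logic applies column-by-column on the history frame):

```python
import numpy as np
import pandas as pd

# Two leading NaNs (window started before any trade) and one gap mid-window.
s = pd.Series([np.nan, np.nan, 5.0, np.nan, 5.2])

# ffill first: the mid-window gap takes the last known (past) price,
# then bfill only patches the leading NaNs at the window start.
print(s.ffill().bfill().tolist())  # [5.0, 5.0, 5.0, 5.0, 5.2]

# bfill first pulls the future price 5.2 back into the gap: look-ahead bias.
print(s.bfill().ffill().tolist())  # [5.0, 5.0, 5.0, 5.2, 5.2]
```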
On the other hand, dropna is not always bad; it depends on how the pandas object is structured. For a single stock, dropna is ok. Coming in from pipeline, with its different structure, dropna drops only the securities whose own values in the pipe's added columns are nan, so it can be beneficial there. Someone will correct me if I'm wrong. This is fine for pipeline:
context.stocks = pipeline_output('pipe_name').dropna()
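To illustrate (a made-up frame shaped like pipeline output, indexed by security with the pipe's columns; names are placeholders):

```python
import numpy as np
import pandas as pd

# Pipeline output: one row per security, one column per factor added to the pipe.
pipe = pd.DataFrame({
    'rsi': [30.0, np.nan, 70.0],
    'vol': [1e6,  2e6,    3e6],
}, index=['AAPL', 'XYZ', 'THIN'])

# dropna() here removes only the security whose OWN factor value is NaN;
# the other securities keep their full rows.
print(pipe.dropna().index.tolist())  # ['AAPL', 'THIN']
```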
With history '1d' it isn't an issue, unless I'm mistaken, since it would be odd for any stock to have had a day on which it never traded at all, with every minute nan and thus the '1d' price unpopulated.
For the wider audience, why all this concern about nan's?
NaN values pass silently through comparison checks, for example equal/not-equal or greater-than, and survive math without raising an error. Comparisons involving NaN simply return False, and arithmetic propagates the NaN, so the results are quietly wrong. ta-lib usually (or always) will not complain if there aren't too many, but the more there are, the less accurate the resulting figure. It seems to me Quantopian might have the best data available, so ffill and bfill would be the closest one can come for RSI etc. One might ask oneself: if, using ffill and bfill, backtest RSI or other ta-lib results don't match some site out there, is that site the gold standard, or is the data here actually better and these RSI values more accurate?
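A short demonstration of that silent behavior:

```python
import numpy as np

x = np.nan

# NaN compares False against everything, including itself, without raising...
print(x > 0, x == x)           # False False

# ...and it silently propagates through arithmetic and aggregations.
print(x + 1.0)                 # nan
print(np.mean([1.0, 2.0, x]))  # nan -- one bad bar poisons the whole average
```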