Are there any published rates for records per second for Data 360 ingestion?
I haven't found anything; the only relevant note may be the disclaimer mentioned in the real-time ingestion docs:
Actual performance can vary based on your source system, network conditions, data volume, transformation logic, and real-time data graph configuration. Real-time ingestion supports multiple channels, including Web SDK, Mobile SDK, and Server-to-Server (S2S) ingestion.
When a data stream is set to upsert, it's best not to exceed a 50-GB aggregate across all files in a single run. This number is strongly influenced by the number of distinct data dates present in the table, the overall size of the table, and several other factors such as the number of columns. As these inputs increase, you may experience performance degradation.
For larger target tables and larger upsert ingestions, we strongly recommend using engagement tables and selecting an engagement date field that contains many distinct dates. Failing to use engagement tables, or selecting a low-cardinality date field, may cause performance degradation.
Data size is the sum of all file sizes for a single datastream job. When a data stream is set to full refresh, it's best not to exceed 1000 GB across all files in a single run. This number is strongly influenced by the number of distinct data dates present in the table, the overall size of the table, and several other factors such as the number of columns. As these inputs increase, you may experience performance degradation.
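Since the docs define data size as the sum of all file sizes in a single datastream job, a quick pre-flight check against the recommended ceilings can be sketched as follows. This is a hypothetical helper, not part of any Salesforce SDK; the 50 GB and 1000 GB thresholds come from the quoted guidance above.

```python
# Recommended aggregate-size ceilings from the quoted docs (hypothetical
# helper, not an official API): 50 GB for upsert, 1000 GB for full refresh.
LIMITS_GB = {"upsert": 50, "full_refresh": 1000}

def check_run_size(file_sizes_bytes, mode):
    """Sum file sizes for a single datastream job and compare the total
    against the recommended ceiling for the given refresh mode."""
    total_gb = sum(file_sizes_bytes) / 1e9
    limit_gb = LIMITS_GB[mode]
    return total_gb, limit_gb, total_gb <= limit_gb

# Example: three files totalling 60 GB exceed the upsert guidance.
total, limit, ok = check_run_size([30e9, 20e9, 10e9], "upsert")
print(f"{total:.0f} GB vs {limit} GB ceiling -> {'OK' if ok else 'over guidance'}")
```

Staying under these ceilings does not guarantee a given throughput; per the disclaimer above, actual performance still depends on table size, distinct data dates, and column count.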