mikez
Hey everyone, I'm based in Sydney. We should do a meetup here once the COVID restrictions ease! Mike
-
I called Interactive Brokers yesterday and they said it's possible to get margin up to A$25k. It requires a credit check though. Have you tried CMEG? I'm planning to give them a go.
-
Hey BBT community, I wanted to share a solution for slow connections, as it's been an absolute lifesaver for my day trading with DAS Trader Pro. It requires some technical knowledge. It works well for those of us who live outside North America, far from the data centres in New Jersey. I trade from Sydney, Australia; there's no major city physically further from New Jersey than Sydney. (As a side note, this may also help if you have a slow computer.)

Problem

I consistently get terrible network latency (ping) during the first 30 mins after the market open, to the extent that DAS is unusable for about 30 mins. Price updates would arrive 10sec (sometimes 30sec) late. The image below shows an 85sec delay on one of the bad days. The issue becomes dramatically worse the more charts and montages I have open; this is likely due to the L2 data. I only trade the open, so it's quite frustrating to lose the first 30 mins to internet issues.

My Tested Solution

My solution was to run DAS Trader on a Windows desktop virtual machine in the cloud, at a location near the NYC data centres, and then use remote desktop to access that machine. The following steps are for Google Cloud Platform (you can also use Amazon Web Services or Microsoft Azure, but I wouldn't stray from those three).

Step 1) Sign up for an account with Google Cloud and set up a virtual machine. This YouTube video shows how to set one up. Choose a beefy instance if possible; I'm using 8 CPUs and 16GB of RAM. Choose Windows Server Datacenter (with Desktop Experience) as the machine image. These machines are charged on a time-usage basis, so turn yours off when you're not trading!

Step 2) In your Windows Remote Desktop client, set Colour to 15-bit and Experience to 'Modem'. This significantly reduces bandwidth usage.

Step 3) Install DAS on the virtual machine and test.
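If you prefer, the Step 2 settings can also be saved in a .rdp connection file rather than clicked through the GUI. A sketch of the relevant entries, assuming the standard Windows Remote Desktop client file format ('session bpp:i:15' is 15-bit colour, 'connection type:i:1' is the Modem experience profile; the remaining lines disable visual extras that eat bandwidth):

```
session bpp:i:15
connection type:i:1
compression:i:1
disable wallpaper:i:1
disable full window drag:i:1
disable menu anims:i:1
disable themes:i:1
```

Save these lines into a .rdp file (alongside your host address) and double-click it to connect with the low-bandwidth profile applied.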
The expected result is lower latency during periods of high data flow. I don't have the bandwidth numbers, but I assume bandwidth and latency are correlated for our purposes. Now, during the market open:

i) the ping from my virtual machine to the exchange is 20ms
ii) the ping from my home to the virtual machine is about 200ms
iii) the total, about 220ms, is much lower than before

The primary trade-off is that the desktop experience is not as visually snappy, but I'll take that in a heartbeat over what I had before.

Why the Solution Works

This is an explanation of what's happening behind the scenes.

Before:
1) The quote data was sent from the New Jersey data centre to my laptop in Sydney.
2) My laptop ran DAS and rendered the charts and L2 stock data.

After:
1) The quote data is sent from the New Jersey data centre to my virtual machine in North Virginia.
2) My virtual machine runs DAS and renders the charts and L2 data.
3) My remote desktop client connects to the virtual machine and fetches the 'video feed'. This is quite efficient: it fetches only the necessary on-screen changes. It's similar to watching a YouTube video, except we're watching a computer desktop feed. Not every pixel on screen is refreshed; only a fraction of the pixels change a few times a second, and only those refreshed pixels are sent over the internet and repainted on my monitor at home. This saves a lot of bandwidth.

As a result, we have minimised the amount of data that needs to physically travel, improving the experience of using DAS. The remote desktop screen feed transfers much less data than raw stock data. It's difficult to verify, but I suspect the original bottleneck is in the undersea fibre optic cables between LA and Sydney.
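If you want to measure ping numbers like the ones above yourself, one quick way is to time a TCP connect from each vantage point (home and VM). A minimal Python sketch; the host and port you probe are up to you, e.g. your own VM's address:

```python
# Estimate round-trip latency to a host by timing a TCP connection setup.
# A TCP connect takes roughly one round trip, so this approximates ping
# without needing ICMP privileges.
import socket
import time


def tcp_rtt_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the time, in milliseconds, taken to open a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close it immediately
    return (time.perf_counter() - start) * 1000
```

Run it once from home against your VM's address (the ~200ms hop) and once on the VM against a server near the exchange (the ~20ms hop) to see where your latency actually lives.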
The reason I believe this is that I also tested running a VM in the AWS Sydney region and hit the same issue, so my woes were unlikely to be caused by my home internet connection. A map of the submarine data cable network (as at 2015) is below. Hope this is useful!
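For anyone curious why the Sydney hop can never be fast, the speed of light in fibre puts a hard floor on the round trip regardless of how good the cables are. A back-of-envelope sketch; the distance and fibre slowdown factor are rough approximations, and real cable routes (e.g. via LA) are longer still:

```python
# Back-of-envelope: theoretical minimum round-trip time (RTT) between
# Sydney and the New Jersey data centres, limited by the speed of light
# in optical fibre.
GREAT_CIRCLE_KM = 16_000    # approximate Sydney -> New Jersey great-circle distance
LIGHT_SPEED_KM_S = 300_000  # speed of light in a vacuum
FIBRE_FACTOR = 1.5          # light travels roughly 1.5x slower in glass fibre

one_way_s = GREAT_CIRCLE_KM / (LIGHT_SPEED_KM_S / FIBRE_FACTOR)
rtt_ms = 2 * one_way_s * 1000
print(f"theoretical minimum RTT: {rtt_ms:.0f} ms")  # prints ~160 ms
```

So even a perfect straight-line cable leaves Sydney around 160ms from New Jersey, which is why moving the latency-sensitive part (DAS itself) next to the exchange helps so much.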