
Archived Discussions

Recent member discussions

The Algorithmic Traders' Association prides itself on providing a forum for the publication and dissemination of its members' white papers, research, reflections, works in progress, and other contributions. Please note that archive searches and some of our members' publications are reserved for members only, so please log in or sign up to gain the most from our members' contributions.

Renting CPU power to run our models


 Cameron Wild, Portfolio Manager

 Friday, August 15, 2014

When I have a new idea, it might take me about a month to design and code the new model. The next step is to brute-force optimise it, and unfortunately that takes a whole lot of CPU resources. To this end I have been using Amazon AWS EC2 for the last year, and I'm pretty happy with them. But there has always been only a very vague explanation of what hardware/performance you get for your money. Until recently, that is, when they finally started telling us exactly which Intel chip and how many vCPUs you get for each instance. They also say explicitly that "Each vCPU is a hyperthreaded core." Unfortunately, that appears to be completely untrue.

I originally wrote this post to ask the question: what exactly does AWS mean when they say "virtual CPU"? But then I stumbled upon this excellent blog that really gets to the heart of it: pythian.com/blog/virtual-cpus-with-amazon-web-services

The key conclusions are: Amazon says, "Each vCPU is a hyperthreaded core." The blog's tests show, "A vCPU is most definitely not a core. It's a half of a shared core, or more exactly, one hyperthread." So if you're looking for the equivalent compute capacity of a hyperthreaded 8-core server, you would need to purchase 16 vCPUs.

Let me be generous here and say that AWS have made an innocent typo on their web page. What they meant to say is, "Each vCPU is a hyperthread OF a core."

Has anyone out there had any experience with the competitors of AWS EC2? Did you find them to be better value for money? And are they safe, solid and trustworthy? Thank you for your responses.
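Under the blog's interpretation (one vCPU = one hyperthread, and two hyperthreads per physical core), the sizing arithmetic is a one-liner. A minimal sketch; the function name is illustrative, not any AWS API:

```python
# Sizing sketch: if one vCPU is a single hyperthread and each physical
# core exposes two hyperthreads, matching an N-core hyperthreaded server
# takes 2*N vCPUs. Names here are illustrative, not an AWS API.

def vcpus_for_physical_cores(physical_cores: int, threads_per_core: int = 2) -> int:
    """vCPUs needed to match a server with `physical_cores` hyperthreaded cores."""
    return physical_cores * threads_per_core

# The example from the post: an 8-core hyperthreaded server.
print(vcpus_for_physical_cores(8))  # 16
```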



5 comments on article "Renting CPU power to run our models"


 Bob Bolotin, Developer of "PowerST: The Power System Tester"

 Wednesday, September 3, 2014



Nikolai P.: Your pointer to Joe's Datacenter got me looking at a third possibility (other than AWS or buying a server).

Joe's rents dedicated servers very cheaply. You can rent a dedicated server (running Linux; $15/mo more for Windows) for $35 to $100 per month. If 2 GB of RAM is enough for what you are doing, you can get a dual-processor 3 GHz Intel Xeon (processor type unspecified) for $50/mo. For $75/mo you can get 8 GB of RAM with dual quad-core Intel Xeon L5420 processors at 2.5 GHz, which from Google searches seems to be just a notch below the server that you pointed to for sale on eBay.

Above you mention that an AWS c3.4xlarge instance with 16 vCPUs at $0.8/hr costs more than $500/month at 24/7 usage. You say that the server you pointed to on eBay is 30% slower. From my Google searches, the $75/mo server at Joe's is a little slower than the eBay server, so let's say it is half the speed of the AWS configuration you point to.

If my assumptions are correct, that is $75/mo for half the speed of what would cost more than $500/mo on AWS. In other words, it's $75/mo for computing power that would cost more than $250 on AWS, which is vastly cheaper.

But this is comparing to a full month AWS 24/7. If you only need a few hours or a week or two weeks, or your use of AWS is sporadic, then the pricing comparison changes (same thing I said in my previous posts).

I suspect that renting a server month to month could be a good tradeoff for the type of strategy research we are discussing. If you want to crunch numbers for anywhere from a few weeks to a few months, then renting a dedicated server sounds like it could be attractive versus AWS.
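One way to see where the crossover lies is to compute the break-even point between a flat monthly rental and AWS hourly billing. A rough sketch using the prices quoted in this thread (2014 figures); function names are my own:

```python
# Break-even sketch: compare a flat monthly dedicated-server rental with
# AWS on-demand hourly billing. Prices are the figures quoted in this
# thread (2014); function names are illustrative.

def monthly_aws_cost(rate_per_hour: float, hours_used: float) -> float:
    return rate_per_hour * hours_used

def break_even_hours(monthly_rental: float, aws_rate_per_hour: float) -> float:
    """Hours of AWS use per month above which the flat rental is cheaper."""
    return monthly_rental / aws_rate_per_hour

# $75/mo dedicated server vs. a $0.8/hr AWS instance:
print(round(break_even_hours(75.0, 0.8), 1))  # 93.8

# But the $75 box is assumed to run at roughly half the speed, so the
# fair comparison is against half an instance (~$0.4/hr of capacity):
print(round(break_even_hours(75.0, 0.4), 1))  # 187.5
```

By this arithmetic, somewhere past a week or so of continuous equivalent compute per month, the dedicated rental pulls ahead, which matches the conclusion above.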

Cameron W., who started this topic, said above, "when I complete an optimisation run I might well trade that model for the next six months, or 2 years - who knows". That matches my experience: the need for massive computing is a step between designing a strategy and trading it. The rented CPU is needed for a period of time, and then interest shifts to something else.

This points to renting rather than buying. The idea of purchasing a previous-generation server on eBay is interesting, but then there are ongoing costs: Nikolai P. said you would then pay $50/month to host it at Joe's Datacenter, or face a $150 increase in your electric bill to run it at home, when you can rent almost as much machine at Joe's Datacenter for $75/month and cancel the service when the research project is finished.

I suppose my conclusion is that if you really want something that you can run 24/7 month after month then buying a server and putting it into a data center may be attractive. If you want to crunch numbers for a month or a few months, then renting a dedicated server is attractive. If your need is sporadic and/or shorter term, then AWS is attractive.

Comments? Do my conclusions make sense?

Either way, the bigger picture is that there are great options available these days to make calculation-intensive strategy research practical from both a convenience and a cost standpoint, whether for heavy numerical computations, stepping options, or some form of Monte Carlo testing. It opens up possibilities.



 Arnaud Vincent, Human computation, Big Data & Machine Learning

 Thursday, September 4, 2014



@Cameron - we use GPUs (CUDA/NVIDIA) for brute-force computing on our own computers; sometimes we manage to multiply computing power by 20 or 30. But you'll probably have to recode your algo to use the full power of GPU computation... and you never know in advance how much you will gain (sometimes nothing...).



 Nikolai P., Open to Projects / Opportunities. Super Fast Delivery.

 Thursday, September 4, 2014



@Arnaud - I need to learn how to utilize CUDA processors. Sounds interesting.

@Cameron - A 3 GHz Xeon of that generation is equivalent to a Pentium D. You definitely want to stay away from that: such a 2-core server would perform at least 10 times worse computationally than an L5520.

The L5420 is equivalent to a Core 2 Quad, while the L5520 is equivalent to a Core i7. If you use virtualization, the L5520 would be much better, since that generation added special memory tables to the CPU to make virtualization faster. If you don't use virtualization, then the L5420 will be only 10-15% slower, unless your code takes advantage of hyperthreading or the new instructions that came with the Core i7 architecture, in which case it is at least twice as slow.

When I was buying my server, I also saw L5420 servers on eBay for only $180, and it was very tempting until I realized the advantage of the L5520 for virtualization.



 Guy Marcelis, Consultant, Investor, Entrepreneur

 Thursday, September 4, 2014



@Nikolai: CUDA cards can be found at interesting prices on eBay too ;) Two of those in a decent Xeon box give you baby-Cray processing power, but (as I mentioned earlier, or did I?) and as Arnaud stated as well, sometimes enabling CUDA in your code doesn't bring anything (and it's tedious).



 Nikolai P., Open to Projects / Opportunities. Super Fast Delivery.

 Thursday, September 4, 2014



I think it's more useful for image processing (e.g. OpenCV) than for trading. Even though I have not tried it myself, from what I understand, CUDA can only significantly accelerate simple, matrix-based math, which is exactly what image processing needs.

The weird part is, every time I wrote code for day trading, it was mostly business logic that had to figure out what to buy and what to sell, and to make sure results were consistent between back-test and real-time. That is very different from just crunching numbers, which is what CUDA is good at.


TRADING FUTURES AND OPTIONS INVOLVES SUBSTANTIAL RISK OF LOSS AND IS NOT SUITABLE FOR ALL INVESTORS
Copyright 2018 Algorithmic Traders Association