
All Activity

  5. It's quite personal because it depends on your strategy. Aiman did a video recently where I think he suggested three consecutive green months in sim. In terms of win rate or a certain R, that depends on your strategy: for example, Thor has a target of around 4R (although it's usually higher) and his success rate is well above 70% I believe, while Aiman has a lower success rate but his R per trade is quite a bit higher. In general terms there are two main things that will cause you to fail: 1) a bad system, 2) not following your system (when it's good). Deal with the system first; trying to do both live is what causes a lot of traders pain in the early days. So you should be comfortably green, but set that target for yourself, like 50R per month, 100R, it's up to you. For me the biggest thing with going live too early is that the psychology changes when real money is on the line, so you really, really, really need to trust your system to help you resist the impulse to stop adhering to it. If you don't trust it explicitly, you start stopping out early (or not at all and blow up), taking profit too quickly, etc., and the whole premise of your system falls over. The only way around that is to prove to yourself that it works. Imo it's one of the biggest reasons new traders bleed money in the beginning; it takes a lot of them out permanently, while others persevere through it and give themselves a chance to become successful. Martin
  6. Hi Martin, thanks a lot for your reply! I agree, this forum doesn't look too active any more. What would you consider a reasonable KPI (or KPIs) to achieve in sim before going live? I'm thinking of something like getting green, a certain avg P&L per day, a certain win rate, a certain R rate, or how successful my strategies are. Also thanks for pointing me in the direction of the journaling video, I will definitely check it out. Cheers Daniel
  7. Hello, has this script been released? I am looking for this one and will pay for it. I short stocks and I keep R fixed and add in the money + move my stop, 3 to 4 times. It would be amazing if I could execute with the hotkey instead of doing the calculations in Excel and then manually entering. Thank you!
  8. Hey, I read this a couple of days ago and thought I'd give newer traders, or someone who uses Chartlog, first chance to reach out. I don't think the forum is really active, unfortunately. In terms of going live, don't rush it. Many traders just end up with a downward equity curve by going live too quickly (me included). Test things out and prove to yourself you're profitable in sim first. I saw your post separately on Chartlog; I don't use it personally, so I can't really offer specific advice. There's a video in the education center about journaling, so maybe give that a watch. I'm no longer a newbie, so if you have any questions then feel free to reach out, or the mentorships are also a good place to ask. Martin
  9. Hello all, I was wondering if there is a key for the short side where I can incorporate adds. For example, short side: stop loss 10 dollars, entry 9.50, add at 9.25, move stop to 9.75. Is there a key that allows one to do this multiple times? I usually move my stop down, keeping R the same, 3/4/5 times.
  10. @Abdul @Julien Maisan @Abdul Sultani Hello gentlemen, did anyone get this working? I trade similarly and scale into my positions; it is horribly inefficient for me to enter my position and then have to use Excel to figure out my adds and new stops lol. I usually add 4 or 5 times to my short position (primarily short bias). Would appreciate any and all help, and will pay to get this script.
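For anyone wanting to check the arithmetic behind these adds before scripting it: keeping total dollar risk (R) fixed while scaling into a short is just solving for the new stop after each add. A minimal Python sketch of that math (not a DAS hotkey script; the function name and share sizes are illustrative, using the 9.50 entry / 10.00 stop from the example above):

```python
def new_stop_after_add(qty, avg_price, add_qty, add_price, risk_dollars):
    """For a SHORT position: return (total_qty, new_avg, new_stop) such that
    total dollar risk stays equal to risk_dollars after the add."""
    total = qty + add_qty
    avg = (qty * avg_price + add_qty * add_price) / total
    # For a short, the stop sits above the average entry; pick the distance
    # so that total_shares * distance == risk_dollars.
    stop = avg + risk_dollars / total
    return total, avg, stop

# Short 100 @ 9.50 with a 10.00 stop is $50 of risk; now add 100 @ 9.25:
total, avg, stop = new_stop_after_add(100, 9.50, 100, 9.25, 50.0)
print(total, avg, round(stop, 4))  # 200 9.375 9.625
```

The thread's example moves the stop to 9.75, which implies a different add size; the function simply solves for whatever stop keeps R constant given the sizes you actually use.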
  11. Hello, I am looking to exchange with people using logging software, especially Chartlog like I do, on how you utilize Tags / Custom Fields / Strategy Descriptions for your logging and improvement. I would be highly interested in all kinds of setups you found useful (or not!) and of course will gladly share my approaches so far. Looking forward to reading from you. Cheers Daniel
  12. Hi, not sure if anyone is reading this but I'll give it a try :). My name is Daniel, and I am trading from Germany, Central Europe. I started in summer '22, read all of Andrew's books, and have a setup ready with DAS and BBT. I have been simulating on a daily basis since this year and wonder if I should go real soon or if it's too early. Theory-wise I am currently doing the BBT basic modules and reading The Playbook by M. Bellafiore. My current missions are to establish a consistent logging routine using Chartlog and to improve my strategies, with a current focus on exiting / taking profit. I would be highly interested in any kind of contact and exchange on these or other newbie topics, please reach out :). Looking forward to meeting you all. Cheers Daniel
  13. Earlier
  14. If so, can you share how to set it up on a 2-minute chart for intraday? I am currently reading Anchored VWAP by Brian Shannon. Thanks
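Since the question is about setting up anchored VWAP on a 2-minute chart, the underlying calculation may help: anchored VWAP is just a running volume-weighted average price that starts at a chosen anchor bar (a swing low, news bar, etc.) instead of the session open. A minimal Python sketch with made-up bars:

```python
def anchored_vwap(prices, volumes, anchor):
    """Cumulative volume-weighted average price starting at the anchor bar.
    Returns one VWAP value per bar from the anchor onward."""
    out = []
    pv_sum = 0.0   # running sum of price * volume
    vol_sum = 0.0  # running sum of volume
    for p, v in zip(prices[anchor:], volumes[anchor:]):
        pv_sum += p * v
        vol_sum += v
        out.append(pv_sum / vol_sum)
    return out

# Toy 2-minute bars, anchored at index 1 (e.g., a swing low bar)
prices = [10.0, 10.2, 10.4, 10.1]
volumes = [100, 200, 100, 100]
line = anchored_vwap(prices, volumes, 1)  # one value per bar from the anchor
```

Charting platforms do exactly this accumulation from whichever bar you anchor to; on a 2-minute chart each (price, volume) pair is one 2-minute bar.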
  15. Not sure if you were on the e-mail distribution list. We are meeting at 4 PM at Andrew's hotel: ALDEN Suite Hotel Splügenschloss Zurich, Spluegenstrasse 2, Zürich, ZH, 8002, Switzerland
  16. I'm in too… looking forward to the meetup! Cheers
  17. Hi, it would be great to join in Zurich. Can you confirm the meeting spot?
  18. Count us in for 2! Looking forward to meeting some BBT folks face to face. Just let us know the when and where. JenniferL
  19. Hi guys! I'm looking forward to meeting you guys on Sunday
  20. We have some people confirmed for this meetup who have not confirmed here, so this meetup is happening.
  21. Hi Guys, I recently moved to Tampa, FL from Toronto, Canada. Looking to connect with traders around Tampa, please message me if anyone is interested in meeting up and probably create a small support group of traders. Thanks Ravi
  22. We're having a Meetup with Andrew in Zurich on March 12th, 2023, 4:00 PM and you're invited! Venue: Alden Hotel. Whether you're a pro or just starting out, this is a great opportunity to connect with other traders, learn from each other, and have some fun. So mark your calendars and stay tuned for updates!
  23. Comments from the BBT community:

Alfredo 3/9/2023 @ 12:48 pm: @Paras Is it possible to have 2 different brokerage accounts in DAS? (IB + TD Ameritrade). My other question is whether you know COBRA Trading. Thanks
trader0 3/9/2023 @ 12:50 pm: @Alfredo Separate DAS apps for TDA and IB, $150/mo for each
Bandit 1/24/2023 @ 12:21 pm: @Paras thanks for inspiring me to move to DAS-TD and taking my comms away!! I trade the open and actually feel like I get better fills than with IB, plus I've had shorts on several occasions where IB has not. So thank you :)
MomoTrader 3/1/2023 @ 3:13 pm: @StephenMeowkmus @Hans I moved from DAS-IB to DAS-TD and am very happy and wish I had done it earlier. At the open I do find a millisecond delay with DAS-TD, but not significant enough... I am a momentum trader and do quite a bit of scalping at the open... no big complaints. The zero commissions make it really worth it.
NVDA 3/1/2023 @ 3:22 pm: @Hans late to this discussion, but there are several studies out that show you get better fills with TD than IB.
MomoTrader 3/1/2023 @ 3:33 pm: @MasterChief https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4189239
MomoTrader 3/1/2023 @ 3:34 pm: @Hans not arguing against your experience... I myself felt a millisecond delay in fills with DAS-TD (vs IB), but the fill price seems to be better with TD per those studies.
MomoTrader 3/1/2023 @ 3:36 pm: @Peter_laptop ...just to clarify, DAS is not direct-access trading with IB & TD, only direct routing. They are direct access to the market only with native brokers using their API.
Hans (Contributor) 3/1/2023 @ 3:36 pm: @Peter they were just showing that studies have shown TD gets better fills than IB. Nobody was pushing one or the other; I have both.
Shelley 3/1/2023 @ 3:37 pm: TD customer service is excellent. I was not impressed with IB's customer service.
David76 3/2/2023 @ 11:27 am: I don't think I understand how DAS is calculating trades. At the start of the morning I will position into 4 stocks, but will have between 6 and 8 tickets.
MomoTrader 3/2/2023 @ 11:30 am: @David76 every transaction is counted as a ticket... like if you short 500 shares and the broker breaks it into two 250-share lots, that's two tickets.
MomoTrader 3/2/2023 @ 11:31 am: @David76 IB was notorious for such splits, and charged commission for every ticket. Not seeing that too much with TD.
Lori S 3/3/2023 @ 3:18 pm: DAS is $120 and then the data for TDA was going to be another $150, I thought.
trader0 3/3/2023 @ 3:19 pm: @Lori S No, I have DAS-TDA (switched from DAS-IB); same cost for both.
trader0 3/3/2023 @ 3:18 pm: @Lori S makes sense: DAS is $150/mo for DAS-TDA and DAS-IB with TotalView, ARCA.
Lori S 3/3/2023 @ 3:19 pm: OK, so I would have the same cost, just less in commissions...
trader0 3/3/2023 @ 3:19 pm: @Lori S yes
DarkMoon 3/17/2023 @ 11:45 am: I paid for both DAS-TD and DAS-IBKR for a month and ran some stats... after comms, TDA won out.
Hans (Contributor) 3/17/2023 @ 11:39 am: I strongly suggest using TD with DAS instead of IB. The fills are good now, and the commission savings are a lifesaver while you learn.
Hans (Contributor) 3/17/2023 @ 11:47 am: @John I've had both, and there is sometimes a slight delay getting filled on TD with DAS. There are independent tests that show better fills on TD, but I don't think they take the delay into consideration. Your strategy may well require very fast fills, so maybe TD isn't the best fit for you. But why not spend $150.00 on a month of DAS with TD and see for yourself? If I had started with DAS-TD, I would have saved thousands in comms.
Hans (Contributor) 3/17/2023 @ 11:53 am: @John_H a couple of traders here in chat were insistent that I was wrong about TD DAS because independent tests have shown the best market-order fills with TD. I traded them side by side, and sometimes DAS-TD is delayed; it's that simple.
bv 3/17/2023 @ 11:41 am: TD Ameritrade is not available to Canadians, but TD investing is.
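To illustrate the ticket-splitting point MomoTrader makes: if a broker bills each ticket as a separate order with a per-order minimum, splitting fills raises the total cost even though the share count is unchanged. A rough Python sketch with illustrative tiered-style rates (not any broker's actual schedule):

```python
def commission(shares, per_share=0.0035, minimum=0.35):
    """Per-ticket commission: per-share rate subject to a per-ticket minimum.
    Rates are illustrative only."""
    return max(shares * per_share, minimum)

# One 500-share order filled as a single ticket:
single = commission(500)             # 1.75
# Same 500 shares broken into five 100-share tickets, each billed separately:
split_5 = 5 * commission(100)        # still 1.75 (each ticket exactly at minimum)
# Broken into ten 50-share tickets, the minimum now dominates:
split_10 = 10 * commission(50)       # 3.50, double the single-ticket cost
```

The minimum only bites once tickets get small, which is why heavy splitting of orders was expensive under per-ticket billing, and why zero-commission routing sidesteps the issue entirely.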
  24. Hi guys, I live in Calgary, Alberta, Canada. I would like to create a little community (within BBT) based in Calgary to meet in person from time to time and share our ideas and progress in the trading world. Anyone living in Calgary, please contact me. Many thanks, Alla
  25. I want to share my options chart with our community. Hope to see more charts from everyone: https://drive.google.com/file/d/1iEyw4G-GVVkeacohQlu-kSX_7YAPu9r1/view
  26. I go to https://us.econoday.com/byweek.asp?cust=us every day to check on market data, then copy important events and paste them on my $SPY chart to keep track of them. I made a TEXT hotkey to draw the box on the chart.
  27. Hey Justin, is there any hotkey script that can do the same regardless of whether you are short or long? Meaning, if I am long it will sell 50%, and if I am short it will buy 50%. This would reduce the number of hotkeys, so I wouldn't need separate ones for long and short. Not sure if it is possible.
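For what it's worth, the logic being asked for is straightforward even though I can't confirm the exact DAS hotkey syntax for it: the sign of the current position tells you which side the exit order must be. A Python sketch of that logic (names are illustrative, not DAS script syntax):

```python
def half_exit_order(position):
    """Given a signed share position (+ long, - short), return the
    (side, quantity) order that closes half of it, whichever side you're on."""
    if position == 0:
        return None  # nothing to close
    qty = abs(position) // 2
    side = "SELL" if position > 0 else "BUY"
    return side, qty

half_exit_order(400)   # ("SELL", 200)
half_exit_order(-300)  # ("BUY", 150)
```

Any script that references the signed position size and trades against it implements exactly this, which is why a single side-agnostic hotkey is at least possible in principle.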
  28. I asked ChatGPT how to do some stock trade analysis in R. My questions are in bold.

**What statistical model should I use to analyze the best factors for a stock trade?**

There are various statistical models that can be used to analyze the best factors for a stock trade. The choice of model depends on the data available, the research question, and the level of complexity required. Here are a few commonly used models:

1. Linear Regression: Linear regression is a commonly used statistical model that can explore the relationship between a dependent variable (e.g., stock price) and one or more independent variables (e.g., company financials, market trends, etc.).
2. Logistic Regression: Logistic regression can be used when the dependent variable is categorical (e.g., buy or sell). This model can help identify the factors that are most predictive of the desired outcome.
3. Time Series Analysis: Time series analysis can be used to study the patterns and trends in stock prices over time. This can help identify potential factors that influence stock prices, such as market trends or company news.
4. Decision Trees: Decision trees are a machine learning model that can be used to analyze the factors that influence stock prices. This model can help identify the most important factors and how they interact with each other.
5. Random Forest: Random forest is another machine learning model for the same task; it identifies the most important factors and their interactions while also providing a measure of the importance of each factor.

It's important to note that the choice of model depends on the specific research question and the available data. It's often a good idea to consult with a statistician or data scientist to determine the best model for your particular situation.
**Would XGBoost be good for that?**

Yes, XGBoost (Extreme Gradient Boosting) is a machine learning algorithm that can be used for stock trading analysis. XGBoost is a powerful and flexible algorithm that has become popular in recent years due to its high accuracy and ability to handle large datasets.

XGBoost is an ensemble learning algorithm, which means it combines multiple weak models (decision trees in this case) to create a stronger model. It does this by iteratively training and adding new trees, with each new tree attempting to correct the errors of the previous trees. This allows XGBoost to learn complex relationships between the features and the target variable, which is useful in stock trading analysis, where many factors can influence prices.

One advantage of XGBoost for this purpose is that it can handle both numerical and categorical data, which is common in finance. Additionally, XGBoost can report which features are most important for predicting stock prices, helping traders identify the key factors to consider when making trades. However, XGBoost may not always be the best choice depending on the specific research question and available data; it's worth considering other statistical models and techniques before settling on an approach.

**How many trades do I need to get a reliable model?**

The number of trades required to build a reliable model depends on various factors, such as the complexity of the model, the quality and quantity of the data, and the level of accuracy required. Generally speaking, the more data you have, the better the model is likely to be. However, the quality of the data is more important than the quantity of data.
In other words, it's better to have a smaller amount of high-quality data than a larger amount of low-quality data. The quality of the data can be affected by various factors such as data cleaning, feature selection, and data preprocessing techniques. In addition, it's important to consider the level of accuracy required for your model: a higher level of accuracy may require a larger number of trades. There is no specific number of trades required to build a reliable model, as it depends on the specific research question and the available data. It's important to use statistical methods, such as cross-validation, to evaluate the performance of the model and ensure that it is reliable and accurate.

**What R code would I use to analyze trades by best times, various indicator statuses, and entry signals?**

To analyze trades by best times, various indicator statuses, and entry signals in R, you can use various packages and functions depending on the specific analysis you want to perform. Here's an example of how you can use the `tidyverse` package to load and analyze your data:

1. Load the `tidyverse` package and read in your data:

```
library(tidyverse)

trades_data <- read_csv("path/to/your/data.csv")
```

2. Clean and prepare your data by filtering out any unnecessary columns and formatting the date and time variables:

```
trades_data <- trades_data %>%
  select(date, time, entry_signal, indicator_status, profit_loss) %>%
  mutate(
    datetime = lubridate::ymd_hms(paste(date, time)),
    entry_signal = as.factor(entry_signal),
    indicator_status = as.factor(indicator_status)
  )
```

3.
Group your data by the variables you want to analyze and calculate summary statistics, such as the average profit/loss for each group:

```
trades_summary <- trades_data %>%
  group_by(hour = lubridate::hour(datetime), entry_signal, indicator_status) %>%
  summarize(avg_profit_loss = mean(profit_loss), total_trades = n())
```

In this example, we group the data by the hour of the trade, the entry signal, and the indicator status (naming the derived column `hour` so it can be referenced in the plot below), then calculate the average profit/loss and the total number of trades for each group.

4. Visualize the results using graphs such as line charts or bar charts:

```
ggplot(trades_summary, aes(x = hour, y = avg_profit_loss, fill = entry_signal)) +
  geom_bar(stat = "identity", position = "dodge") +
  facet_wrap(~ indicator_status, scales = "free_y") +
  labs(x = "Hour of Trade", y = "Average Profit/Loss", fill = "Entry Signal")
```

In this example, we use `ggplot` to create a bar chart showing the average profit/loss for each hour of the trade, grouped by entry signal, with `facet_wrap` creating a separate panel for each indicator status.

These are just a few examples of the types of analysis you can perform in R to analyze trades by best times, various indicator statuses, and entry signals. The specific code will depend on your research question and the data you have available.

**What additional R code would I use to run decision tree, random forest, and XGBoost models?**

To run decision tree, random forest, and XGBoost models in R, you can use the following packages and functions:

1. Load the necessary packages:

```
library(caret)        # for data splitting and model evaluation
library(rpart)        # for decision tree models
library(randomForest) # for random forest models
library(xgboost)      # for XGBoost models
```

2.
Split your data into training and testing sets using the `createDataPartition` function from the `caret` package:

```
set.seed(123)  # for reproducibility
index <- createDataPartition(trades_data$profit_loss, p = 0.8, list = FALSE)
train_data <- trades_data[index, ]
test_data <- trades_data[-index, ]
```

In this example, we randomly split the data into training and testing sets using an 80/20 split.

3. Build a decision tree model using the `rpart` function:

```
decision_tree_model <- rpart(profit_loss ~ ., data = train_data)
```

In this example, we use the `rpart` function to build a decision tree model that predicts profit/loss based on all other variables in `train_data`.

4. Build a random forest model using the `randomForest` function, setting `importance = TRUE` so permutation importance is available later:

```
random_forest_model <- randomForest(profit_loss ~ ., data = train_data,
                                    importance = TRUE)
```

5. Build an XGBoost model using the `xgboost` function. XGBoost requires a numeric matrix, so encode the factor columns first (e.g., with `model.matrix`) rather than calling `as.matrix` on a data frame that contains factors:

```
x_train <- model.matrix(profit_loss ~ . - 1, data = train_data)
xgboost_model <- xgboost(data = x_train, label = train_data$profit_loss,
                         nrounds = 100, objective = "reg:squarederror")
```

In this example, we specify the number of boosting rounds (`nrounds`) and the objective function (`objective`) for the model.

6.
Evaluate the models using the `predict` function and the `RMSE` function from the `caret` package:

```
decision_tree_predictions <- predict(decision_tree_model, test_data)
decision_tree_rmse <- RMSE(decision_tree_predictions, test_data$profit_loss)

random_forest_predictions <- predict(random_forest_model, test_data)
random_forest_rmse <- RMSE(random_forest_predictions, test_data$profit_loss)

x_test <- model.matrix(profit_loss ~ . - 1, data = test_data)
xgboost_predictions <- predict(xgboost_model, x_test)
xgboost_rmse <- RMSE(xgboost_predictions, test_data$profit_loss)
```

In this example, we use the `predict` function to generate predictions for the test data with each model, and the `RMSE` function to calculate the root mean squared error between the predicted and actual profit/loss values.

These are just a few examples of how you can run decision tree, random forest, and XGBoost models in R. The specific code will depend on your research question and the data available. Additionally, it's important to properly tune and validate your models to ensure they are accurate and reliable.

**How do I evaluate and extract the best factors from those models?**

To evaluate and extract the best factors from your models, you can use various techniques depending on the type of model and your specific research question. Here are a few examples:

1. Decision Tree: Decision trees are relatively simple models that can be visualized directly, making it easy to see which factors are most important for predicting the target variable. You can visualize the tree using the `rpart.plot` package, which shows the most important variables and the cutoff point for each split:

```
library(rpart.plot)

rpart.plot(decision_tree_model, type = 0, extra = 101,
           under = TRUE, varlen = 0, box.palette = "Blues")
```

The `type = 0` argument draws the default tree layout (split labels at the splits, node labels at the leaves), and `extra = 101` displays the number and percentage of observations at each node (the classification-only `extra` codes such as 102 don't apply to a regression tree like this one).

2. Random Forest: Random forests are more complex than single trees, so extracting the most important variables takes an extra step. The `randomForest` package includes an `importance` function that ranks the variables; for a regression forest, the permutation measure is `%IncMSE` (it is only computed if the model was trained with `importance = TRUE`):

```
rf_importance <- importance(random_forest_model)
rf_importance <- data.frame(variable = rownames(rf_importance),
                            importance = rf_importance[, "%IncMSE"])
rf_importance <- rf_importance[order(-rf_importance$importance), ]
```

In this example, we calculate the importance of each variable in the random forest model, convert the results to a data frame, and sort the variables by importance.

3. XGBoost: XGBoost models are also complex, but the `xgb.importance` function from the `xgboost` package calculates the importance of each feature directly from the fitted model:

```
xgboost_importance <- xgb.importance(model = xgboost_model)
```

`xgb.importance` returns a table with one row per feature (columns such as `Gain`, `Cover`, and `Frequency`), already sorted by `Gain`, so no manual sorting is needed.

These are just a few examples of how you can evaluate and extract the most important variables from your models.
Other techniques may be more appropriate depending on your specific research question and the type of model you are using. Keep in mind that the most important variables may change with the model and the data used, so evaluate the models thoroughly and validate the results.
