Home › Forums › General Discussions › Thread for (stupid) questions
This topic contains 15 replies, has 6 voices, and was last updated by despacito 1 year, 8 months ago.


From time to time I read something that grabs my full attention. Then I think about it over and over again – for days, weeks, months, even years – without finding answers to my questions on that topic. In such cases I’d like to discuss the ideas/theories/topics and hear other people’s opinions.
This thread is therefore dedicated to everybody’s (maybe sometimes stupid) questions that don’t seem worth opening a new thread for.
You are welcome to ask your questions here and to contribute and discuss your opinions with other members, as long as they are related to trading and the markets.
… and here is my question …
Two or three years ago I read, for the first time, a rhetorical question:
Have you ever wonder, when you have a line graph, why in the hell would you invent the candlestick!!! ?
Clever Japanese people, they never did tell you why they invented it in the first place. Decades had passed and then candlestick patterns were published… but this is not as powerful as the core concept of the simple candlestick. Two threads have even been opened in order to decipher some of CrucialPoin’s hints: “About Candlesticks: Why they were created? What do they reveal?” and “Higher Edge within a Single Candlestick“.
At first I thought candlesticks had been developed either to show an acceleration of price more easily or to anchor the eye to the open price (like a primitive version of an MA). However, “why in the hell would you invent the candlestick!!! ?” is a great question. At the time candlesticks were invented (18th century), line graphs of prices may be thought of as an archaic version of tick charts. During the day prices may rise or fall, and a line graph of intraday prices should show more detail than a single daily candlestick. Thus, anchoring in time or estimating acceleration would have been much more consistent if one had divided the day into smaller increments and used those… So my former explanations seem insufficient.
Maybe there is even more, let’s see:
Let the candle wicks light the path…
[JoeyNY]
(Maybe this only has to do with common candlestick formations. Or: do we have to measure the price behavior within a candlestick’s lifetime, before it closes, in order to get a highly probable direction of the future price path?!)
More recently I read this one:
CandleStick is The Most important Part of analysing Market
There are Big secrets in CandleSticks
The latter talks a lot about gaps in prices that can help to identify the times and price levels of reversals. This seems somewhat related to the original idea behind the ‘similarity system’ thread. However, AmirShahiN also talks about the analysis of smaller-TF candlesticks as well as the analysis of the latest candle of the current TF.
Now the ball is in your court! What are your opinions?
Do you think there is a higher edge within a single candlestick (despite the separation of prices into arbitrary time frames)?
Why would you invent the candlestick chart when you still have a (probably) higher-resolution line chart?
Do you see any predictive pattern in a mathematical formula using the HLOC prices of one candlestick (or of several candlesticks of a lower TF)?
 This reply was modified 2 years ago by Anti.
First of all, I do not think there is such a thing as a ‘stupid’ question. If someone does not know a certain fact, or does not understand every detail of somebody else’s statement, or is just a newbie in a certain field, or … (fill in whatever makes sense!), he or she should feel free to post a question, hoping the community can help out! I believe we should run this forum in exactly this spirit, distinguishing it from certain other FX forums where newbies are sometimes torn down, even threatened with a ban, just for asking questions … even by a forum owner. (And no: I don’t have FF in mind this time.)
@anti: In this spirit, I really do appreciate you’ve opened this thread!
My opinion regarding candlesticks vs. line chart: when both charts show the same timeframe, the candlestick view has the advantage of visualizing some very basic statistics about the prices within the chosen fragmentation of the time dimension: the extreme highs and lows, along with open and close prices, as compared to close prices alone.
A good trader is a realist who wants to grab a chunk from the body of a trend, leaving top and bottomfishing to people on an ego trip. (Dr. Alexander Elder)
If the condition is that both charts represent the same TF, then your explanation seems the most obvious one. But what if not? For instance, non-MT4 platforms usually support tick charts as well. Tickers from the stock exchange don’t show candles at all (although I don’t really know what all the numbers represent …).
Is there any advantage in using candlesticks rather than tick charts alone?
Ok, when looking at tick data, we do not look at any timeframe, just incoming prices. OHLC, or candlesticks, can only be constructed when we start to structure tick data according to rules we define. The most obvious rule is simple timing, i.e. a uniform timeframe.
Most traders choose one or more timeframes for their trading that are somehow related to the time horizon (average holding period) of their positions. If you plan to hold a position for days or weeks, looking at M1 or M5 might be of minor importance, and might just serve to optimize your entry.
The uniformity of candlesticks relating to the time dimension provides an easy way to evaluate price movements visually – in retrospect. Raw ticks cannot provide that.
On the other hand, ticks can provide a different measure, the density of ticks in ticks per minute. Candlesticks will filter that part of market information out, while summarized tick density relating to the timeframe we’re looking at will be represented by volume.
So I think that candlesticks including volume are just a good way to summarize raw tick data relating to the timeframes that are relevant to our specific trading approach.
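To make that summarizing step concrete, here is a small sketch (plain Python, just my own illustration – real platforms do this internally) of bucketing raw ticks into uniform candles, with volume as the tick count per bucket:

```python
def ticks_to_candles(ticks, bucket_seconds=60):
    """Group (timestamp, price) ticks into uniform OHLC candles.

    Volume here is simply the number of ticks per bucket, i.e. the
    'tick density' summarized per timeframe.
    """
    candles = {}
    for ts, price in ticks:
        bucket = int(ts // bucket_seconds) * bucket_seconds
        c = candles.get(bucket)
        if c is None:  # first tick of this bucket defines the open
            candles[bucket] = {"open": price, "high": price,
                               "low": price, "close": price, "volume": 1}
        else:
            c["high"] = max(c["high"], price)
            c["low"] = min(c["low"], price)
            c["close"] = price  # last tick seen so far in the bucket
            c["volume"] += 1
    return [dict(time=t, **c) for t, c in sorted(candles.items())]

# five ticks spanning two one-minute buckets
ticks = [(0, 1.10), (10, 1.12), (30, 1.09), (70, 1.11), (80, 1.13)]
print(ticks_to_candles(ticks, bucket_seconds=60))
```

Note how the tick arrival times themselves are discarded inside each bucket – exactly the information loss discussed above.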
Tick data and candlesticks are just different views on the very same subject: prices. When doing biological or geological research, a scientist also has to decide whether a helicopter view or a microscopic view makes more sense for the question at hand, right?
Just my 2 cents …
Interesting topic.
I would say, for me personally, candlesticks give me the positive effect of seeing the OHLC of, for example, weeks, days, quarters – easily and quickly.
I think these levels are important. Price (or rather: the traders) reacts at those levels quite often.
I’ve puzzled some time over the statement that there’s a mathematical edge in the OHLC of candlesticks… just with no result for me.
"A dream you dream alone is only a dream. A dream you dream together is a reality." (John Lennon)
@bartleby: Thanks for participating! Well, the mathematical edge especially got my attention. But I’ve also never found anything promising.
Hmm, I can’t explain to myself how candlesticks can provide any mathematical edge, since they contain less information than tick charts. On this point I agree with Simplex: candlesticks might be seen as a sampling of the underlying tick flow, in either a statistical or a signal-processing sense. From the latter point of view, it is important to note that the assumption of uniform sampling times is convenient (and necessary for most indicators) but not actually valid in terms of the underlying data. Any density or velocity information of the incoming ticks is discarded in that case, which in my opinion is valuable and predictive information. However, I’ve never been able to trade this, simply due to trading latencies.
Whereas candle highs and lows encode some sort of price span (variance) per sample, I think that for continuous (non-closing) markets and artificial sampling intervals (e.g. < daily), the open and close prices are technically somewhat arbitrary, since they depend on the exact timing of the first/last incoming ticks. So the essential question might be whether there is some edge arising from the psychological component, i.e. the traders’ (over)interpretation of candles and the patterns they form.
@anti: Hello again. I know, it’s been a long time…
 This reply was modified 2 years ago by flx23.
@flx23: Well, so it seems that none of us sees an edge there.
However, another thing that rankles me is how semi-professional traders traded before computers and fast data feeds arose. I mean, how did successful traders who didn’t work directly at a stock exchange or a bank decide when to enter and/or exit a trade? Did those traders call their account managers and give their orders over the phone?
Thank you for creating this thread, Anti. One question: if we shift an MA indicator back by some bars, let’s say half of its period, we get empty lines at the last half-period of candles. When I try to modify an MA indicator to behave like the shifted one and fill the empty space with some previous candles’ data, it starts to repaint. I think the right one should not repaint (I could be wrong though …, but I am sure the great coder there never creates a repainting indicator). Any idea? Thanks.
If I get your idea, I think there is no way to make the values of the first x/2 bars non-repainting, because no indicator can know future values …
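To illustrate the point: with a centered (shifted-back) MA, the newest half-period of bars simply has no defined value yet – any fill would require prices that haven’t arrived, so it necessarily repaints. A small sketch (plain Python, hypothetical, not the actual MQL4 indicator in question):

```python
def centered_sma(prices, period):
    """Simple moving average plotted shifted back by period//2 bars.

    The last period//2 slots stay None: filling them would need
    future prices, so any fill would repaint once those arrive.
    """
    shift = period // 2
    out = [None] * len(prices)
    for i in range(period - 1, len(prices)):
        avg = sum(prices[i - period + 1:i + 1]) / period
        out[i - shift] = avg  # place the average in the middle of its window
    return out

prices = [1, 2, 3, 4, 5, 6, 7, 8]
print(centered_sma(prices, 4))  # [None, 2.5, 3.5, 4.5, 5.5, 6.5, None, None]
```

The trailing None slots are exactly the “empty lines” despacito describes.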
Thanks, Anti. Now comes a beginner question: how do I compile an MQH include/library file to EX4 format (like we compile common MQ4 source code to EX4)? After compiling it, can we still use it the usual way, calling the include file at the top of our indicator? Example:
#include "mylib.mqh"
In this case, our library/include MQH file would no longer be in source-code format …
Is it possible?
EDIT: I found another solution. Instead of using #include to call the MQH file (located in the Include folder), FerruFx @FF said that we can use #import to call functions in an EX4 file located in the Libraries folder. So we just create our functions in MQL4 as usual, put the file in the Libraries folder, where it gets compiled to EX4 format, and then use an #import statement in our indicator to call the functions. It is actually simple …
Hola … buenos dias,
I see a good thread here: https://penguintraders.com/forums/topic/forthefriendsofcultivatedfiltering/ . I think it is related to my problem, but it differs somewhat from what I am going to ask. Let’s say I want to do a prediction (a.k.a. extrapolation) – how do I do it? According to this post: http://math.tutorvista.com/calculus/extrapolation.html , the formula is given by:
y(x) = y1 + (x − x1) (y2 − y1) / (x2 − x1)
where the two end points of the line are (x1, y1) and (x2, y2), and x is the point whose value is to be extrapolated…
If I am not mistaken, this is only linear extrapolation through 2 points. If we apply this formula to our charts and take just the last 2 points (let’s say the last 2 closing prices), it will not represent the whole picture. How about using a prediction model as described here? https://en.wikipedia.org/wiki/Linear_prediction
But … how many points do we need for the value p (for i = 1 to i = p)? And how do we calculate the predictor coefficients? I don’t think we need a Kalman filter here, right? Any idea? Gracias
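In code form, the two-point formula above is simply (a plain Python sketch, function name mine):

```python
def linear_extrapolate(x1, y1, x2, y2, x):
    """Two-point linear extrapolation:
    y(x) = y1 + (x - x1) * (y2 - y1) / (x2 - x1)"""
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

# e.g. last two closes 1.10 (bar 1) and 1.12 (bar 2), extrapolate bar 3:
print(linear_extrapolate(1, 1.10, 2, 1.12, 3))  # approximately 1.14
```

As noted, it just extends the straight line through the last two points – no more picture than that.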
 This reply was modified 1 year, 8 months ago by despacito.
Hola,
my thread indeed addresses the same simple mathematical problem: linear prediction, or linear filtering, however you want to call it. The formula you mentioned above leads to the class of finite impulse response (FIR) filters once you solve the underlying optimization problem using whatever optimization criterion. Solving the problem, i.e. calculating the coefficients, is easy; the question, however, is: what is a good optimization criterion? A common choice is the mean squared error (MSE) between a true sample time series and the one produced by your predictive filter (model), which is then minimized.
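For concreteness, a minimal sketch (my own Python illustration with assumed function names, not the exact method from the thread) of fitting the coefficients of a p-tap linear predictor by minimizing the MSE over training data with ordinary least squares:

```python
import numpy as np

def fit_predictor(series, p):
    """Fit coefficients a_1..a_p minimizing the MSE of
    x[n] ~ a_1*x[n-1] + ... + a_p*x[n-p] over the training series."""
    # row n of X holds the lagged window [x[n-1], x[n-2], ..., x[n-p]]
    X = np.column_stack([series[p - k - 1:len(series) - k - 1]
                         for k in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series, coeffs):
    """One-step-ahead prediction from the most recent p samples."""
    p = len(coeffs)
    return float(np.dot(coeffs, series[-1:-p - 1:-1]))

# sanity check: a noiseless linear trend is predicted exactly
t = np.arange(100, dtype=float)
series = 0.5 * t + 3.0
a = fit_predictor(series, p=3)
print(predict_next(series, a))  # close to 53.0, the next trend value
```

On real price data the residual is of course nowhere near zero – this only shows the mechanics of the MSE fit.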
Anyway, the problem with all these approaches is that you train your model on historical data and may end up with a solution that works perfectly on your training data but fails miserably when applied to new, still unseen data. This problem is well known as overfitting. To reduce it, you usually train your model on several historical data sets and always validate that the current model still performs well on data outside your training sample (cross-validation). If you’re lucky, you end up with a filter that has some predictive power w.r.t. the time series it was trained on – at least as long as the “characteristics” of the time series don’t change too much in the near future… which is a rather optimistic assumption. Typically, you would want to retrain your model from time to time as its predictive power decreases. So far, so good.
Another problem with suchlike models/filters is their lagging nature. To put it simply, the prediction will always come (too) late, because your model relies on a data window of the most recent N prices, where the most important ones, i.e. those with the highest “predictiveness”, are of course the very latest prices. There is always a trade-off between good predictiveness and small lag, i.e. “timeliness”. You cannot have both – at least not with the classical mean-square criterion…
What I described in my thread is basically an alternative optimization criterion for finding the model coefficients. It is a variant of the MSE where you can weight some components of the error individually. These components are: accuracy, smoothness and timeliness. For finding local lows and highs in trading you don’t need to predict the absolute value of a future price; the direction (up/down) is just fine. So, instead of accuracy, you favor a timely and smooth prediction curve. Timeliness is mandatory for placing a trade, and smoothness (noise suppression) is a very preferable aspect of prediction reliability.
Regarding the Kalman filter: that is just another very common approach. You don’t have to use it for this kind of problem, but you could. Kalman filters are a perfect choice for many problems in engineering where you basically know your (physical) model and mainly have to deal with observation noise. In the domain of trading we usually also have no idea about the model/process itself and hence cannot clearly distinguish (not even statistically) between signal/process characteristics and noise.
Addition regarding p: this is a hyperparameter you have to define in advance, before learning the parameters, and it determines the depth of the considered data history (memory/model size). A complex model can capture complex signal characteristics but is also more prone to overfitting than smaller model sizes. Generally, you want p to be as small as possible while still maintaining predictive power. So p itself might be subject to an optimization process.
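A sketch of treating p itself as something to optimize (again just my own Python illustration, with assumed names): sweep candidate values and keep the one with the lowest one-step-ahead error on a held-out validation segment:

```python
import numpy as np

def one_step_val_error(series, p, split):
    """Fit a p-tap least-squares predictor on series[:split] and
    measure the mean one-step-ahead squared error on the held-out tail."""
    train = series[:split]
    X = np.column_stack([train[p - k - 1:len(train) - k - 1]
                         for k in range(p)])
    y = train[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    errs = []
    for n in range(split, len(series)):
        window = series[n - p:n][::-1]  # [x[n-1], ..., x[n-p]]
        errs.append((np.dot(a, window) - series[n]) ** 2)
    return float(np.mean(errs))

def choose_p(series, candidates, split):
    """Return the candidate p with the lowest validation error."""
    return min(candidates, key=lambda p: one_step_val_error(series, p, split))

# toy example: noisy sinusoid, validated on the last 50 samples
rng = np.random.default_rng(0)
t = np.arange(300, dtype=float)
series = np.sin(0.2 * t) + 0.05 * rng.standard_normal(300)
best = choose_p(series, candidates=[1, 2, 4, 8], split=250)
print(best)
```

In line with the remark above, a smarter rule would prefer the smallest p whose error is within tolerance of the best, rather than the strict minimum.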
Hola, … you train your model using historical data and you might end up with a solution that works perfectly for your training data but fails miserably when applied to new and still unseen data. This problem is well known as overfitting. Typically, you would want to retrain your model from time to time as its predictive power decreases. So far, so good. Another problem with suchlike models/filters is their lagging nature. What I described in my thread is basically an alternative optimization criterion for finding the model coefficients. It is a variant of the MSE where you can weight some components of the error individually. …
Addition regarding p: this is a hyperparameter you have to define in advance, before learning the parameters, and it determines the depth of the considered data history (memory/model size). A complex model can capture complex signal characteristics but is also more prone to overfitting than smaller model sizes. Generally, you want p to be as small as possible while still maintaining predictive power. So p itself might be subject to an optimization process.
That’s great information, Flx23. I thought before that p must be much, much bigger than the normal period we take (around 1.5x to 10x bigger); I think I get it now… it serves another purpose.
Gracias
