Hello,

I trained a CNN on synthetic data to perform a segmentation task on human faces. To evaluate the network's predictions at test time, I used 200 examples from the database to compute precision and recall.

Is this number of examples sufficient, given that I control the data generator myself and build the database by randomly drawing elements from centered Gaussian distributions?
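One way to reason about whether 200 examples is enough is to look at the width of a confidence interval around the measured metric. As a rough sketch (treating precision as a simple per-example proportion, which ignores pixel-level correlations within a segmentation mask), the Wilson score interval gives the uncertainty you would expect at n = 200; the observed precision of 0.90 below is a made-up illustration, not a value from the question:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion (e.g. precision or recall)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical example: 180 correct out of 200 test images (precision 0.90)
lo, hi = wilson_interval(180, 200)
print(f"95% CI for precision: [{lo:.3f}, {hi:.3f}]")
```

With n = 200 the interval spans roughly ±0.04 around the point estimate, so if that resolution is acceptable for your use case, 200 examples is a defensible test-set size; if you need tighter bounds, the interval width shrinks proportionally to 1/sqrt(n). Note also that because you control the generator, the test set only measures performance on the synthetic distribution, not on real faces.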


Thank you,

