[alsa-devel] [PATCH 0/2] ALSA: nm256: Fine-tuning for three function implementations
From: Markus Elfring <elfring@users.sourceforge.net>
Date: Thu, 16 Nov 2017 18:00:18 +0100
Two update suggestions were taken into account from static source code analysis.
Markus Elfring (2):
  Adjust five function calls together with a variable assignment
  Use common error handling code in snd_nm256_probe()
 sound/pci/nm256/nm256.c | 48 +++++++++++++++++++++++++++++-------------------
 1 file changed, 29 insertions(+), 19 deletions(-)
From: Markus Elfring <elfring@users.sourceforge.net>
Date: Thu, 16 Nov 2017 17:37:26 +0100
The script "checkpatch.pl" pointed information out like the following.
ERROR: do not use assignment in if condition
Thus fix affected source code places.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
---
 sound/pci/nm256/nm256.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/sound/pci/nm256/nm256.c b/sound/pci/nm256/nm256.c
index b97f4ea6b56c..35ce5b3fd81a 100644
--- a/sound/pci/nm256/nm256.c
+++ b/sound/pci/nm256/nm256.c
@@ -1334,7 +1334,8 @@ snd_nm256_mixer(struct nm256 *chip)
 	if (! chip->ac97_regs)
 		return -ENOMEM;
-	if ((err = snd_ac97_bus(chip->card, 0, &ops, NULL, &pbus)) < 0)
+	err = snd_ac97_bus(chip->card, 0, &ops, NULL, &pbus);
+	if (err < 0)
 		return err;
 	memset(&ac97, 0, sizeof(ac97));
@@ -1493,7 +1494,8 @@ snd_nm256_create(struct snd_card *card, struct pci_dev *pci,
*chip_ret = NULL;
-	if ((err = pci_enable_device(pci)) < 0)
+	err = pci_enable_device(pci);
+	if (err < 0)
 		return err;
 	chip = kzalloc(sizeof(*chip), GFP_KERNEL);
@@ -1585,7 +1587,8 @@ snd_nm256_create(struct snd_card *card, struct pci_dev *pci,
 		chip->buffer_end = buffer_top;
 	else {
 		/* get buffer end pointer from signature */
-		if ((err = snd_nm256_peek_for_sig(chip)) < 0)
+		err = snd_nm256_peek_for_sig(chip);
+		if (err < 0)
 			goto __error;
 	}
@@ -1635,7 +1638,8 @@ snd_nm256_create(struct snd_card *card, struct pci_dev *pci,
 	// pci_set_master(pci); /* needed? */
-	if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops)) < 0)
+	err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops);
+	if (err < 0)
 		goto __error;
 	*chip_ret = chip;
@@ -1717,7 +1721,9 @@ static int snd_nm256_probe(struct pci_dev *pci,
 		capture_bufsize = 4;
 	if (capture_bufsize > 128)
 		capture_bufsize = 128;
-	if ((err = snd_nm256_create(card, pci, &chip)) < 0) {
+
+	err = snd_nm256_create(card, pci, &chip);
+	if (err < 0) {
 		snd_card_free(card);
 		return err;
 	}
@@ -1744,7 +1750,8 @@ static int snd_nm256_probe(struct pci_dev *pci,
 		card->shortname, chip->buffer_addr, chip->cport_addr, chip->irq);
-	if ((err = snd_card_register(card)) < 0) {
+	err = snd_card_register(card);
+	if (err < 0) {
 		snd_card_free(card);
 		return err;
 	}
From: Markus Elfring <elfring@users.sourceforge.net>
Date: Thu, 16 Nov 2017 17:51:17 +0100
This issue was detected by using the Coccinelle software.

* Add a jump target so that a bit of exception handling can be better reused
  at the end of this function.

* The script "checkpatch.pl" pointed information out like the following.

  ERROR: do not use assignment in if condition

  Thus fix two affected source code places.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
---
 sound/pci/nm256/nm256.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)
diff --git a/sound/pci/nm256/nm256.c b/sound/pci/nm256/nm256.c
index 35ce5b3fd81a..3d6106800ae9 100644
--- a/sound/pci/nm256/nm256.c
+++ b/sound/pci/nm256/nm256.c
@@ -1706,8 +1706,8 @@ static int snd_nm256_probe(struct pci_dev *pci,
 		break;
 	default:
 		dev_err(&pci->dev, "invalid device id 0x%x\n", pci->device);
-		snd_card_free(card);
-		return -EINVAL;
+		err = -EINVAL;
+		goto free_card;
 	}
 	if (vaio_hack)
@@ -1723,10 +1723,9 @@ static int snd_nm256_probe(struct pci_dev *pci,
 		capture_bufsize = 128;
 	err = snd_nm256_create(card, pci, &chip);
-	if (err < 0) {
-		snd_card_free(card);
-		return err;
-	}
+	if (err < 0)
+		goto free_card;
+
 	card->private_data = chip;
 	if (reset_workaround) {
@@ -1739,11 +1738,13 @@ static int snd_nm256_probe(struct pci_dev *pci,
 		chip->reset_workaround_2 = 1;
 	}
-	if ((err = snd_nm256_pcm(chip, 0)) < 0 ||
-	    (err = snd_nm256_mixer(chip)) < 0) {
-		snd_card_free(card);
-		return err;
-	}
+	err = snd_nm256_pcm(chip, 0);
+	if (err < 0)
+		goto free_card;
+
+	err = snd_nm256_mixer(chip);
+	if (err < 0)
+		goto free_card;
 	sprintf(card->shortname, "NeoMagic %s", card->driver);
 	sprintf(card->longname, "%s at 0x%lx & 0x%lx, irq %d",
@@ -1751,13 +1752,15 @@ static int snd_nm256_probe(struct pci_dev *pci,
 		chip->buffer_addr, chip->cport_addr, chip->irq);
 	err = snd_card_register(card);
-	if (err < 0) {
-		snd_card_free(card);
-		return err;
-	}
+	if (err < 0)
+		goto free_card;
 	pci_set_drvdata(pci, card);
 	return 0;
+
+free_card:
+	snd_card_free(card);
+	return err;
 }
static void snd_nm256_remove(struct pci_dev *pci)
On Thu, 16 Nov 2017 18:05:27 +0100, SF Markus Elfring wrote:
From: Markus Elfring <elfring@users.sourceforge.net>
Date: Thu, 16 Nov 2017 18:00:18 +0100
Two update suggestions were taken into account from static source code analysis.
Markus, I'd apply this kind of patch only when it has really been tested on the hardware, or when it was converted systematically by a script like spatch. The reason is that you might break something (and you have already broken things in the past). The merit of such a patch is negligible in comparison with the risk of breakage. This code isn't too bad without the fix, after all; everyone can read it pretty well as is.
If these patches were tested on real hardware, or at least on a VM, so that you can show that they don't break anything, I'll happily apply them for the next (4.16) kernel. In that case, please state that explicitly in the changelog.
Or, if you're really working on other real changes (not cosmetic coding-style fixes or code shuffling, but fixing a real bug) *and* such a cleanup is a mandatory preliminary, it can be accepted, too.
thanks,
Takashi
Two update suggestions were taken into account from static source code analysis.
Markus, I'd apply this kind of patches only when they are really tested on the hardware,
I cannot (so far) test all the software and hardware combinations for which I dare to propose changes.
or they were converted systematically by a script like spatch.
There is a general source code transformation pattern involved. So I find that it is systematic.
But I did not dare to develop a script for the semantic patch language (the Coccinelle software) that can handle all the special cases, a few of which are already demonstrated in this tiny patch series.
The reason is that you might break something
There are the usual software development risks.
(and you already broke things in the past).
I have also presented some update suggestions that could be improved.
Another look at the corresponding circumstances might be interesting for further clarification.
The merit by such a patch is negligible in comparison of the risk of breakage.
I imagine that you might like a small object code reduction also for this software module.
These codes aren't too bad without fixing, after all; everyone can read it pretty well as is.
The script "checkpatch.pl" points implementation details out for further considerations.
If these patches were tested on a real hardware,
I assume that this aspect can become a big challenge.
or at least on VM, so that you can show that they don't break anything,
Which test results would you trust (from me)?
I'll happily apply them for the next (4.16) kernel.
Thanks for your general offer.
Or, if you're really working on other real changes
I would find a bit more efficient exception handling useful.
(no cosmetic coding style fixes nor the code shuffling,
I propose to also apply the corresponding checkpatch cosmetics.
but fixing a real bug)
I am trying to adjust the software situation a bit more for better run time characteristics.
*and* such a cleanup is mandatory as preliminary, it can be accepted, too.
There are change combinations needed for the proposed software design direction. Can you see positive effects here?
Regards, Markus
On Thu, 16 Nov 2017 18:48:43 +0100, SF Markus Elfring wrote:
There is a general source code transformation pattern involved. So I find that it is systematic.
But I did not dare to develop a script variant for the semantic patch language (Coccinelle software) which can handle all special use cases as a few of them are already demonstrated in this tiny patch series.
Then you're doing everything by hand, and you can be wrong -- that's the heart of the problem. The risk is bigger than the merit of applying the patch.
So, just prove that your patch doesn't break anything. It doesn't matter whether it's a test with real hardware or with systematic checks. Once it's confirmed, we can apply it. A very simple rule, and this will be valid for most other subsystems, too.
thanks,
Takashi
There is a general source code transformation pattern involved. So I find that it is systematic.
But I did not dare to develop a script variant for the semantic patch language (Coccinelle software) which can handle all special use cases as a few of them are already demonstrated in this tiny patch series.
Then you're doing everything by hands,
I am navigating through possible changes around the pattern “Use common error handling code” mostly manually so far.
and can be wrong
Such a possibility remains as usual.
-- that's the heart of the problem.
There might be related opportunities for further improvements. Do you trust adjustments from an evolving tool more than my concrete contributions?
The risk is bigger than the merit by applying the patch.
I suggest to reconsider this view.
Would you dare to follow any of the presented arguments?
So, just prove that your patch doesn't break anything.
Which kind of information would you find sufficient as “proof”?
Doesn't matter whether it's a test with real hardware or with systematic checks.
I assume that your development concerns matter more in this case.
Once when it's confirmed, we can apply it.
I am curious if other contributors will become interested to confirm something.
A very simple rule,
It might occasionally look simpler than it is in “special cases”.
and this will be valid for most of other subsystems, too.
The response is also varying there as usual.
A few update suggestions from the discussed pattern were integrated (also by you) already. Would you like to continue with similar support in any ways?
Regards, Markus
On Thu, 16 Nov 2017 20:30:24 +0100, SF Markus Elfring wrote:
and can be wrong
Such a possibility remains as usual.
"As usual" doesn't suffice. It must be "almost perfect" for such a code refactoring. The damage from an overlooked mistake is much higher than the merit of such a patch.
If the patch is about fixing a bug, it's a different story. Or if it's a really trivial change (e.g. your sizeof() conversion patches), I can check and apply it easily. But for other changes with more lines, it makes little sense. Again, the risk of breakage increases while the merit is negligible.
-- that's the heart of the problem.
There might be related opportunities for further improvements. Do you trust adjustments from an evolving tool more than my concrete contributions?
Yes, loudly.
I stop at this point, as the rest is simply a repeat from the previous mail.
thanks,
Takashi
"As usual" doesn't suffice.
Additional means can be used to reduce the probability of undesired side effects.
It must be "almost perfect" for such a code refactoring.
Do you get the impression that the shown transformation patterns were applied correctly to the source file “sound/pci/nm256/nm256.c”?
The damage by a overseen mistake is much higher than the merit by such a patch.
Are there any more software developers and code reviewers available who would like to point out another programming mistake in this Linux module?
If the patch is about fixing a bug, it's a different story.
How do “deviations” from the coding style and the evolution of object code size fit to this view here?
Or it's about a really trivial change (e.g. your sizeof() conversion patches), I can check and apply easily.
My update selection can contain also trivial adjustments.
But for other changes with more lines, it makes little sense.
Do you need any more information to see and eventually accept the sense again?
Again, the risk of breakage increases while the merit is negligible.
We disagree about corresponding benefits at the moment. Would any other contributors comment the situation a bit more?
-- that's the heart of the problem.
There might be related opportunities for further improvements. Do you trust adjustments from an evolving tool more than my concrete contributions?
Yes, loudly.
I noticed that the development status of tools which you might find nice at the moment can be also questionable.
I stop at this point, as the rest is simply a repeat from the previous mail.
Are you using a continuous integration system?
Regards, Markus
On Tue, 28 Nov 2017 09:19:48 +0100, SF Markus Elfring wrote:
"As usual" doesn't suffice.
Additional means can be used to reduce the probability of undesired side effects.
Irrelevant; it neither fixes a bug nor brings a dramatic improvement.
It must be "almost perfect" for such a code refactoring.
Can you get the impression that the shown transformation patterns were correctly applied for the source file “sound/pci/nm256/nm256.c”?
Impression doesn't matter. The question is whether it's 100% correct or not in such a case.
The damage by a overseen mistake is much higher than the merit by such a patch.
Are there any more software developers and code reviewers available who would like to point another programming mistake out for this Linux module?
If you have found such people, then it's fine; you can get your patches reviewed and be more assured. But in the current situation, no one else is interested in it, and that's going nowhere.
If the patch is about fixing a bug, it's a different story.
How do “deviations” from the coding style and the evolution of object code size fit to this view here?
Again, it's no fix for a bug.
Or it's about a really trivial change (e.g. your sizeof() conversion patches), I can check and apply easily.
My update selection can contain also trivial adjustments.
The *really* trivial ones were applied. The rest are not.
But for other changes with more lines, it makes little sense.
Do you need any more information to see and eventually accept the sense again?
No. This kind of code refactoring has no more information. It's a "trivial" change, after all.
Again, the risk of breakage increases while the merit is negligible.
We disagree about corresponding benefits at the moment. Would any other contributors comment the situation a bit more?
I hear openly.
-- that's the heart of the problem.
There might be related opportunities for further improvements. Do you trust adjustments from an evolving tool more than my concrete contributions?
Yes, loudly.
I noticed that the development status of tools which you might find nice at the moment can be also questionable.
It depends on the result.
I stop at this point, as the rest is simply a repeat from the previous mail.
Are you using a continuous integration system?
Not really in my side. But there are others doing that.
Takashi
There can be additional means be used to reduce the probability of undesired side effects.
Irrelevant,
I have a different opinion here.
it doesn't fix a bug,
Did I suggest to correct a coding style “bug”?
nor dramatic improvement.
I agree that the change may be small for this software module alone. But I guess we are discussing not just change patterns for this one module, but also for other affected modules (besides a concrete example). The overall result might be more significant.
It must be "almost perfect" for such a code refactoring.
Can you get the impression that the shown transformation patterns were correctly applied for the source file “sound/pci/nm256/nm256.c”?
Impression doesn't matter.
It seems then that you can not get the kind of information you might be looking for at the moment from me (alone).
The question is whether it's 100% correct or not in such a case.
Would any other source code reviewers like to provide a corresponding acknowledgement for concrete changes?
Are there any more software developers and code reviewers available who would like to point another programming mistake out for this Linux module?
If you have find such, then it's fine, you can get your patches reviewed and more assured.
I hope that mailing list readers could offer something.
But in the current situation, no one else is interested in it, and that's going to nowhere.
Did this software module become “too old”?
The *really* trivial ones were applied. The rest are not.
Can higher level transformation patterns become easier to accept by any other means?
Do you need any more information to see and eventually accept the sense again?
No. This kind of code refactoring has no more information. It's a "trivial" change, after all.
Would you like to distinguish the possible update steps better to avoid further confusion around “triviality”?
Are you using a continuous integration system?
Not really in my side. But there are others doing that.
How much does the omission of such a useful development tool influence your concerns?
Would you like to improve the software situation in any ways there?
Regards, Markus
On Tue, 28 Nov 2017 10:50:21 +0100, SF Markus Elfring wrote:
There can be additional means be used to reduce the probability of undesired side effects.
Irrelevant,
I got an other opinion here.
Not from me.
it doesn't fix a bug,
Did I suggest to correct a coding style “bug”?
No. A coding style issue is never a bug.
nor dramatic improvement.
I agree that the change could be small only for this software module alone. I guess that we discuss not only change patterns for this one but also other affected modules here (besides a concrete example). The result summary might be more significant overall.
No.
It must be "almost perfect" for such a code refactoring.
Can you get the impression that the shown transformation patterns were correctly applied for the source file “sound/pci/nm256/nm256.c”?
Impression doesn't matter.
It seems then that you can not get the kind of information you might be looking for at the moment from me (alone).
No, the patch itself speaks.
The question is whether it's 100% correct or not in such a case.
Would any other source code reviewers like to provide a corresponding acknowledgement for concrete changes?
If you get more reviewed-by tags from others, it already means it's safer to apply. Then I can take it. But without that, it's obviously not material to take.
Are there any more software developers and code reviewers available who would like to point another programming mistake out for this Linux module?
If you have find such, then it's fine, you can get your patches reviewed and more assured.
I hope that mailing list readers could offer something.
Let's hope.
But in the current situation, no one else is interested in it, and that's going to nowhere.
Did this software module become “too old”?
Mostly the hardware is too old, or the change itself isn't interesting enough.
The *really* trivial ones were applied. The rest are not.
Can higher level transformation patterns become easier to accept by any other means?
Only if it's assured to work and not to break anything else.
Do you need any more information to see and eventually accept the sense again?
No. This kind of code refactoring has no more information. It's a "trivial" change, after all.
Would you like to distinguish the possible update steps better to avoid further confusion around “triviality”?
Learn from the past.
Are you using a continuous integration system?
Not really in my side. But there are others doing that.
How much does the omission of such an useful development tool influence your concerns?
Can't judge unless I really see / use it.
Would you like to improve the software situation in any ways there?
I *hope*, but only when it's not too annoying.
Takashi
It seems then that you can not get the kind of information you might be looking for at the moment from me (alone).
No, the patch itself speaks.
Are we still trying to clarify (only) two possible update steps for this software module?
If you get more reviewed-by from others, it means already it's safer to apply. Then I can take it.
How are the statistics for such tags in the sound subsystem?
But without that, it's obviously no material to take.
Thanks for such an explanation of your current view.
I hope that mailing list readers could offer something.
Let's hope.
Are any additional communication interfaces helpful?
Did this software module become “too old”?
Mostly the hardware is too old,
Which time frames have you got in mind for acceptable software maintenance?
or the change itself isn't interesting enough.
This is another general possibility.
Can higher level transformation patterns become easier to accept by any other means?
Only if it's assured to work and not to break anything else.
Have you got any steps in mind for an improved “feeling” or “assurance”?
How much does the omission of such an useful development tool influence your concerns?
Can't judge unless I really see / use it.
I find that there are some options to consider.
Would you like to improve the software situation in any ways there?
I *hope*, but only when it's not too annoying.
Under which circumstances are you going to start working with a continuous integration system?
Regards, Markus
On Tue, 28 Nov 2017 13:33:48 +0100, SF Markus Elfring wrote:
It seems then that you can not get the kind of information you might be looking for at the moment from me (alone).
No, the patch itself speaks.
Are we still trying to clarify (only) two possible update steps for this software module?
No.
If you get more reviewed-by from others, it means already it's safer to apply. Then I can take it.
How are the statistics for such tags in the sound subsystem?
Just like other subsystems, with usual reviewed-by tags.
But without that, it's obviously no material to take.
Thanks for such an explanation of your current view.
I hope that mailing list readers could offer something.
Let's hope.
Are any additional communication interfaces helpful?
No idea.
Did this software module become “too old”?
Mostly the hardware is too old,
Which time frames have you got in mind for acceptable software maintenance?
It's not a question for me but for users.
or the change itself isn't interesting enough.
This is another general possibility.
Can higher level transformation patterns become easier to accept by any other means?
Only if it's assured to work and not to break anything else.
Have you got any steps in mind for an improved “feeling” or “assurance”?
Just do proper testing. Either on a real hardware or on a VM.
How much does the omission of such an useful development tool influence your concerns?
Can't judge unless I really see / use it.
I find that there are some options to consider.
Would you like to improve the software situation in any ways there?
I *hope*, but only when it's not too annoying.
Under which circumstances are you going to start working with a continuous integration system?
It's irrelevant, don't side-step.
Takashi
Are any additional communication interfaces helpful?
No idea.
* Can an ordinary telephone call help?
* Will a meeting during a Linux conference be needed?
Have you got any steps in mind for an improved “feeling” or “assurance”?
Just do proper testing. Either on a real hardware or on a VM.
Which test results would you like to see or hear (!) from a real device (or a configuration in a virtual machine)?
Under which circumstances are you going to start working with a continuous integration system?
It's irrelevant, don't side-step.
I find such a development tool very relevant to reduce your concerns.
Regards, Markus
On Tue, 28 Nov 2017 14:00:39 +0100, SF Markus Elfring wrote:
Are any additional communication interfaces helpful?
No idea.
Can an ordinary telephone call help?
Will a meeting during a Linux conference be needed?
No and no.
Have you got any steps in mind for an improved “feeling” or “assurance”?
Just do proper testing. Either on a real hardware or on a VM.
Which test results would you like to see or hear (!) from a real device (or a configuration in a virtual machine)?
I don't mind either case as long as the test works.
Under which circumstances are you going to start working with a continuous integration system?
It's irrelevant, don't side-step.
I find such a development tool very relevant to reduce your concerns.
It's about your patches, not my system.
Takashi
Which test results would you like to see or hear (!) from a real device (or a configuration in a virtual machine)?
I don't mind either case as long as the test works.
How would you notice that a corresponding system test worked in reasonable ways?
I find such a development tool very relevant to reduce your concerns.
It's about your patches, not my system.
Your own automatic test system could provide a bit more confidence for some of the proposed changes, couldn't it?
Regards, Markus
On Tue, 28 Nov 2017 14:17:00 +0100, SF Markus Elfring wrote:
Which test results would you like to see or hear (!) from a real device (or a configuration in a virtual machine)?
I don't mind either case as long as the test works.
How would you notice that a corresponding system test worked in reasonable ways?
It needs trust in the patch author or the tester who reported that it worked. The test result should be mentioned concisely.
I find such a development tool very relevant to reduce your concerns.
It's about your patches, not my system.
Your own automatic test system could provide a bit of more confidence for some change possibilities, couldn't it?
You shouldn't rely on my system. The main point is your patch itself; make your patch more reliable.
Takashi
How would you notice that a corresponding system test worked in reasonable ways?
It needs a trust to the patch author or the tester who reported that it worked.
Can this aspect vary over time?
The test result should be mentioned concisely.
What do you think about introducing accepted automatic test procedures?
You shouldn't rely on my system.
Did this system get sufficient trust so far?
The main point is your patch itself; make your patch more reliable.
It seems that I can make my adjustments only a bit more interesting by positive review comments from other contributors (if you can not become convinced by the concrete source code changes).
Regards, Markus
On Tue, 28 Nov 2017 15:19:55 +0100, SF Markus Elfring wrote:
How would you notice that a corresponding system test worked in reasonable ways?
It needs a trust to the patch author or the tester who reported that it worked.
Can this aspect vary over time?
Not really.
The test result should be mentioned concisely.
How do you think about to introduce accepted automatic test procedures?
If *you* do introduce automatic testing for *your* patches, then I appreciate it.
You shouldn't rely on my system.
Did this system get sufficient trust so far?
I can trust my system for my purpose.
The main point is your patch itself; make your patch more reliable.
It seems that I can make my adjustments only a bit more interesting by positive review comments from other contributors (if you can not become convinced by the concrete source code changes).
Yes.
Takashi
If *you* do introduce automatic testing for *your* patches, then I appreciate it.
How would such a setting influence my trust level for your subsystem maintenance?
I can trust my system for my purpose.
I need to trust it also somehow.
Regards, Markus
On Tue, 28 Nov 2017 15:33:45 +0100, SF Markus Elfring wrote:
If *you* do introduce automatic testing for *your* patches, then I appreciate it.
How would such a setting influence my trust level for your subsystem maintenance?
Give the test result before speaking too much. It's already way too counter-productive, just chatting. Show the result first.
Takashi
How would such a setting influence my trust level for your subsystem maintenance?
Give the test result before speaking too much.
Which concrete data do you expect here?
It's already way too contra-productive, just chatting.
I got an other view.
I hope that we can achieve a better common understanding also in this case.
Show the result at first.
Which one would you like to compare exactly?
Regards, Markus
On Tue, 28 Nov 2017 15:44:07 +0100, SF Markus Elfring wrote:
How would such a setting influence my trust level for your subsystem maintenance?
Give the test result before speaking too much.
Which concrete data do you expect here?
Depends on the result. The bottom line is that you run your patched kernel on the real hardware or equivalent (VM or emulation) for the device you touched.
It's already way too counterproductive, just chatting.
I have a different view.
I hope that we can achieve a better common understanding also in this case.
Hopefully, if you stop asking a new question about every single word.
Show the result first.
Which one would you like to compare exactly?
Run your patched kernel and the driver code on the real machine with the corresponding device. Show the device is running. That's the very first step. Then follow the more detailed tests, but it depends on the subsystem.
Takashi
Give the test result before speaking too much.
Which concrete data do you expect here?
Depends on the result.
How can this vary?
The bottom line is that you run your patched kernel on the real hardware
Which test configurations would you trust finally?
or equivalent (VM or emulation) for the device you touched.
Can all the devices whose source code I dared to adjust a bit be tested in the desired ways within virtual machines?
Run your patched kernel and the driver code on the real machine with the corresponding device. Show the device is running. That's the very first step. Then follow the more detailed tests, but it depends on the subsystem.
How can such descriptions improve the trust situation?
Regards, Markus
On Tue, 28 Nov 2017 16:01:47 +0100, SF Markus Elfring wrote:
Give the test result before speaking too much.
Which concrete data do you expect here?
Depends on the result.
How can this vary?
How? Because I didn't see any test result from you, so I can't trust you.
The bottom line is that you run your patched kernel on the real hardware
Which test configurations would you trust finally?
Do test whatever the users do.
or equivalent (VM or emulation) for the device you touched.
Can all the devices whose source code I dared to adjust a bit be tested in the desired ways within virtual machines?
No, you need to run exactly the kernel that will be used by the user of the target hardware.
Run your patched kernel and the driver code on the real machine with the corresponding device. Show the device is running. That's the very first step. Then follow the more detailed tests, but it depends on the subsystem.
How can such descriptions improve the trust situation?
It's the first step. At least then I can see you did some test. Currently nothing. zero. nada. How can I trust it?
Takashi
Because I didn't see any test result from you,
This is correct so far.
so I can't trust you.
This view did not hinder you from integrating some of my update suggestions which you found easier to handle.
Which test configurations would you trust finally?
Do test whatever the users do.
I find such information too vague for an official acceptance test.
How can such descriptions improve the trust situation?
It's the first step. At least then I can see you did some test. Currently nothing. zero. nada.
I am unsure if acceptable test results will ever be published for this software module.
How can I trust it?
* Would you dare to inspect the shown source code adjustments again?
* What do you think about sorting the remaining update candidates by their change size (or software age)?
Regards, Markus
On Tue, 28 Nov 2017 17:15:27 +0100, SF Markus Elfring wrote:
Because I didn't see any test result from you,
This is correct so far.
so I can't trust you.
This view did not hinder you from integrating some of my update suggestions which you found easier to handle.
The really trivial things are different. Don't mix up things.
Which test configurations would you trust finally?
Do test whatever the users do.
I find such information too vague for an official acceptance test.
No-testing is the worst case.
How can such descriptions improve the trust situation?
It's the first step. At least then I can see you did some test. Currently nothing. zero. nada.
I am unsure if acceptable test results will ever be published for this software module.
Then forget about your patches.
How can I trust it?
- Would you dare to inspect the shown source code adjustments again?
Not unless you give some testing results.
- What do you think about sorting the remaining update candidates by their change size (or software age)?
Irrelevant.
Takashi
No-testing is the worst case.
Free software development can occasionally mean that system tests are performed by a person other than the initial programmer.
I am unsure if acceptable test results will ever be published for this software module.
Then forget about your patches.
I published some change possibilities. Now the chances of receiving constructive review comments for them vary, as usual.
- What do you think about sorting the remaining update candidates by their change size (or software age)?
Irrelevant.
Do you want me to point out any other patches which you could again find easier to handle?
Regards, Markus
On Tue, 28 Nov 2017 17:40:18 +0100, SF Markus Elfring wrote:
No-testing is the worst case.
Free software development can occasionally mean that system tests are performed by a person other than the initial programmer.
So ask someone for testing. If you can find them and confirm that the patch works, I'll happily apply. If not, it goes nowhere.
- What do you think about sorting the remaining update candidates by their change size (or software age)?
Irrelevant.
Do you want me to point out any other patches which you could again find easier to handle?
Only if they got tested and/or got reviewed by others.
Takashi
Do you want me to point out any other patches which you could again find easier to handle?
Only if they got tested and/or got reviewed by others.
Would you find any of the following update suggestions trivial enough to integrate?
Examples: * ALSA: cs5530: Use common error handling code in snd_cs5530_probe() https://lkml.org/lkml/2017/11/18/266 https://patchwork.kernel.org/patch/10064945/ https://lkml.kernel.org/r/a2cb6494-a01c-668a-cd9b-1a8428a0d4c9@users.sourceforge.net
* ASoC: dapm: Use kcalloc() in snd_soc_dapm_new_widgets() https://lkml.org/lkml/2017/8/10/425 https://patchwork.kernel.org/patch/9893713/ https://lkml.kernel.org/r/62be9e32-1b8d-3980-7998-72f82c62a6fe@users.sourceforge.net
* ALSA: seq: Delete unnecessary checks in snd_seq_ioctl_query_subs() https://lkml.org/lkml/2017/1/24/293 https://patchwork.kernel.org/patch/9534795/ https://lkml.kernel.org/r/39bc00c0-ffe7-851c-f329-e7d499f561be@users.sourceforge.net
Regards, Markus
On Tue, 28 Nov 2017 18:15:34 +0100, SF Markus Elfring wrote:
Do you want me to point out any other patches which you could again find easier to handle?
Only if they got tested and/or got reviewed by others.
Would you find any of the following update suggestions trivial enough to integrate?
Examples:
- ALSA: cs5530: Use common error handling code in snd_cs5530_probe() https://lkml.org/lkml/2017/11/18/266 https://patchwork.kernel.org/patch/10064945/ https://lkml.kernel.org/r/a2cb6494-a01c-668a-cd9b-1a8428a0d4c9@users.sourceforge.net
This is no trivial patch.
- ASoC: dapm: Use kcalloc() in snd_soc_dapm_new_widgets() https://lkml.org/lkml/2017/8/10/425 https://patchwork.kernel.org/patch/9893713/ https://lkml.kernel.org/r/62be9e32-1b8d-3980-7998-72f82c62a6fe@users.sourceforge.net
This needs to go through Mark's ASoC tree.
- ALSA: seq: Delete unnecessary checks in snd_seq_ioctl_query_subs() https://lkml.org/lkml/2017/1/24/293 https://patchwork.kernel.org/patch/9534795/ https://lkml.kernel.org/r/39bc00c0-ffe7-851c-f329-e7d499f561be@users.sourceforge.net
This is no trivial patch, either.
Takashi
Examples:
- ALSA: cs5530: Use common error handling code in snd_cs5530_probe() https://lkml.org/lkml/2017/11/18/266 https://patchwork.kernel.org/patch/10064945/ https://lkml.kernel.org/r/a2cb6494-a01c-668a-cd9b-1a8428a0d4c9@users.sourceforge.net
This is no trivial patch.
Why do you find this one more challenging now than a similar one?
ALSA: maestro3: Use common error handling code in two functions https://lkml.org/lkml/2017/9/6/39 https://patchwork.kernel.org/patch/9939985/ https://lkml.kernel.org/r/83b1ba49-253d-72ed-b3b0-ec7d5e72a12c@users.sourceforge.net
Regards, Markus
On Tue, 28 Nov 2017 20:08:13 +0100, SF Markus Elfring wrote:
Examples:
- ALSA: cs5530: Use common error handling code in snd_cs5530_probe() https://lkml.org/lkml/2017/11/18/266 https://patchwork.kernel.org/patch/10064945/ https://lkml.kernel.org/r/a2cb6494-a01c-668a-cd9b-1a8428a0d4c9@users.sourceforge.net
This is no trivial patch.
Why do you find this one more challenging now than a similar one?
ALSA: maestro3: Use common error handling code in two functions https://lkml.org/lkml/2017/9/6/39 https://patchwork.kernel.org/patch/9939985/ https://lkml.kernel.org/r/83b1ba49-253d-72ed-b3b0-ec7d5e72a12c@users.sourceforge.net
Because it turned out that your patch can be wrong and broken.
Takashi
On Tue, 28 Nov 2017 20:48:52 +0100, SF Markus Elfring wrote:
Because it turned out that your patch can be wrong and broken.
At which point did you change your mind about any of my (higher level) patches?
Since your patch brought a regression in the past.
Takashi
At which point did you change your mind about any of my (higher level) patches?
Since your patch brought a regression in the past.
Would you like to discuss the circumstances of the one glitch to which you might be referring?
Regards, Markus
On Tue, 28 Nov 2017 20:57:17 +0100, SF Markus Elfring wrote:
At which point did you change your mind about any of my (higher level) patches?
Since your patch brought a regression in the past.
Would you like to discuss the circumstances of the one glitch to which you might be referring?
No need for discussion. It's difficult to recover lost trust.
The best way is to show how you don't fall into the same issue any longer, and it essentially means the actual testing of the patches. Is it now clear why the testing is demanded? There is no other way.
Takashi
Would you like to discuss the circumstances of the one glitch to which you might be referring?
No need for discussion.
I disagree with this view again.
It's difficult to recover lost trust.
I can follow this view to some degree.
But I find that the current situation might also point out other weaknesses in the general software development process.
The best way is to show how you don't fall into the same issue any longer,
Your expectations aim at lower failure probabilities. But you might become disappointed again because of the nature of human work in general.
and it essentially means the actual testing of the patches.
I find that corresponding progress then also depends on reasonable and accepted procedures from trusted test environments.
Is it now clear why the testing is demanded?
I can follow your desire to some degree.
There is no other way.
There are more (technical) possibilities to consider where development tools like a continuous integration system can help.
Regards, Markus
On Tue, 28 Nov 2017 21:18:01 +0100, SF Markus Elfring wrote:
Would you like to discuss the circumstances of the one glitch to which you might be referring?
No need for discussion.
I disagree with this view again.
So, that's all. Think about how things can be improved on your side, instead of blaming others.
Takashi
So, that's all. Think about how things can be improved on your side,
I know a few possibilities here.
instead of blaming others.
But I also know a few other approaches where higher-level development tools can help to improve the patch review process in significant ways for trusted system environments.
Regards, Markus
My update selection can also contain trivial adjustments.
The *really* trivial ones were applied.
Did you leave over a few from this change category which could be integrated a bit later?
Regards, Markus
On Tuesday 28 November 2017, SF Markus Elfring wrote:
There is a general source code transformation pattern involved. So I find that it is systematic.
But I did not dare to develop a script variant for the semantic patch language (the Coccinelle software) which can handle all the special use cases, a few of which are already demonstrated in this tiny patch series.
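[Editor's note: for illustration only, the simplest variant of the first transformation in this series, splitting an assignment out of an if condition, could be sketched in SmPL roughly as follows. This is a hypothetical sketch, not a script from this thread, and it deliberately ignores the special cases being discussed:

```
@@
expression E;
identifier err;
statement S;
@@
- if ((err = E) < 0)
+ err = E;
+ if (err < 0)
  S
```

A rule like this would rewrite calls such as `if ((err = pci_enable_device(pci)) < 0)` into the two-statement form shown in the patch, which is presumably why the pattern is called systematic.]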
Then you're doing everything by hand,
I am navigating through possible changes around the pattern “Use common error handling code” mostly manually so far.
and can be wrong
Such a possibility remains as usual.
"As usual" doesn't suffice.
Additional means can be used to reduce the probability of undesired side effects.
It must be "almost perfect" for such a code refactoring.
Do you get the impression that the shown transformation patterns were correctly applied to the source file “sound/pci/nm256/nm256.c”?
Have you tested the driver? Probably not. Please don't "improve" working drivers unless you have the hardware to test your changes. Patches like this are known to cause regressions. If the hardware is rare (like the NM256), the regression can hit years later when someone with such HW upgrades distro (e.g. Debian stable).
Do you get the impression that the shown transformation patterns were correctly applied to the source file “sound/pci/nm256/nm256.c”?
Have you tested the driver?
Which results do you expect from a corresponding system test?
Please don't "improve" working drivers unless
I am trying to improve software components also for devices to which I have had no direct access so far.
Patches like this are known to cause regressions.
Do you prefer to keep this software area “frozen” anyhow?
If the hardware is rare (like the NM256), the regression can hit years later when someone with such HW upgrades distro (e.g. Debian stable).
Have you got any resources to reduce corresponding concerns?
Regards, Markus
participants (3)
- Ondrej Zary
- SF Markus Elfring
- Takashi Iwai