fork() and STDOUT/STDERR to the console from child processes

Time: 2022-12-07 22:12:45

I'm writing a program that forks multiple child processes, and I'd like all of these child processes to be able to write lines to STDERR and STDOUT without the output being garbled. I'm not doing anything fancy, just emitting lines that end with a newline (which, at least in my understanding, would be an atomic operation on Linux). perlfaq says:

Both the main process and the backgrounded one (the "child" process) share the same STDIN, STDOUT and STDERR filehandles. If both try to access them at once, strange things can happen. You may want to close or reopen these for the child. You can get around this with opening a pipe (see open) but on some systems this means that the child process cannot outlive the parent.

It says I should "close or reopen" these filehandles for the child. Closing is simple, but what does it mean by "reopen"? I've tried something like this from within my child processes and it doesn't work (the output still gets garbled):

open(SAVED_STDERR, '>&', \*STDERR) or die "Could not create copy of STDERR: $!";
close(STDERR);

# re-open STDERR
open(STDERR, '>&SAVED_STDERR') or die "Could not re-open STDERR: $!";

So, what am I doing wrong with this? What would the pipe example it alludes to look like? Is there a better way to coordinate output from multiple processes together to the console?

2 Solutions

#1

Writes to a filehandle are NOT atomic for STDOUT and STDERR. There are special cases (for example, writes of up to PIPE_BUF bytes to a pipe or FIFO are atomic), but that's not your current situation.

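If you want to see the garbling for yourself, a small sketch along these lines (the child count and line counts are made up) will usually show fragments from different children interleaving, since each logical line here is built from more than one print call:

use strict;
use warnings;

# Each child assembles one logical line from several unsynchronized print
# calls, so fragments from different children can interleave on the shared
# terminal.
for my $id (1 .. 4) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    next if $pid;                # parent keeps launching children
    $| = 1;                      # unbuffer the child's STDOUT
    for my $n (1 .. 100) {
        print "child $id";       # first fragment of the line...
        print " line $n\n";      # ...then the rest plus the newline
    }
    exit 0;
}
wait() for 1 .. 4;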

When it says to re-open STDOUT, what that means is "create a new STDOUT instance". This new instance isn't the same one inherited from the parent. It's how you can have multiple terminals open on your system without all of their STDOUT going to the same place.

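For example, a minimal sketch of "reopening" in the child might look like this, assuming each child is happy to write to its own per-child log file instead of the shared terminal (the file names are made up):

use strict;
use warnings;

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # Child: replace the handles inherited from the parent with fresh ones
    # pointing at destinations of its own.
    open STDOUT, '>', "child.$$.out" or die "can't reopen STDOUT: $!";
    open STDERR, '>', "child.$$.err" or die "can't reopen STDERR: $!";
    print STDOUT "this no longer shares the parent's STDOUT\n";
    print STDERR "and this no longer shares the parent's STDERR\n";
    exit 0;
}
waitpid($pid, 0);

That gives each child its own destination, but it doesn't by itself get their output back onto the console in order; for that you want the pipe approach described next.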

The pipe solution would connect the child to the parent via a pipe (like | in the shell), and you'd need the parent to read from the pipe and multiplex the output itself. The parent would be responsible for reading from the pipe and ensuring that it doesn't interleave output from the pipe with output destined for its own STDOUT. There's an example and writeup of pipes here.

A snippet:

use IO::Handle;

pipe(PARENTREAD, PARENTWRITE);   # child -> parent
pipe(CHILDREAD, CHILDWRITE);     # parent -> child (unused in this excerpt)

PARENTWRITE->autoflush(1);
CHILDWRITE->autoflush(1);

if ($child = fork) { # Parent code
   chomp($result = <PARENTREAD>);            # read one line from the child
   print "Got a value of $result from child\n";
   waitpid($child,0);
} else {             # Child code
   print PARENTWRITE "FROM CHILD\n";         # send a line up the pipe
   exit;
}

See how the child doesn't write to stdout but rather uses the pipe to send a message to the parent, which does the writing with its own stdout. Be sure to take a look at the full writeup, as I omitted things like closing unneeded file handles.

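Applied to the original question, a sketch along these lines (the child count and line counts are made up) has every child write only to its own pipe while the parent alone writes to the real STDOUT, so whole lines come out unmangled:

use strict;
use warnings;
use IO::Handle;

my @readers;
for my $id (1 .. 3) {
    pipe(my $reader, my $writer) or die "pipe failed: $!";
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                     # Child code
        close $reader;
        $writer->autoflush(1);
        print {$writer} "child $id: line $_\n" for 1 .. 5;
        exit 0;
    }
    close $writer;                       # parent keeps only the read end
    push @readers, $reader;
}

# Parent code: drain each pipe in turn and do all the console writing itself.
for my $reader (@readers) {
    while (my $line = <$reader>) {
        print $line;
    }
    close $reader;
}
wait() for 1 .. 3;

Draining the pipes one after another keeps the sketch short; with chattier children you'd reach for IO::Select (or an event loop) so that one slow child can't stall the others.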

#2

While this doesn't help with your garbled output, it took me a long time to find a way to launch a child process that the parent process can write to while the child's stderr and stdout are sent directly to the screen (this solves the nasty blocking issues you can run into when trying to read from two different FDs without using something fancy like select).

Once I figured it out, the solution was trivial:

use IPC::Open3;

# Dup our STDOUT/STDERR for the child so its output goes straight to the
# screen; we only keep a handle on the child's stdin.
my $pid = open3(*CHLD_IN, ">&STDOUT", ">&STDERR", 'some child program');
# write to child
print CHLD_IN "some message";
close(CHLD_IN);
waitpid($pid, 0);

Everything from "some child program" will be emitted to stdout/stderr, and you can simply pump data by writing to CHLD_IN and trust that it'll block if the child's buffer fills. To callers of the parent program, it all just looks like stderr/stdout.
