
add support for compressibility of log file #29932

Merged
cpuguy83 merged 1 commit into moby:master from miaoyq:container-log-add-archive
Mar 19, 2018

Conversation

@miaoyq
Contributor

@miaoyq miaoyq commented Jan 6, 2017

Signed-off-by: Yanqiang Miao miao.yanqiang@zte.com.cn

- What I did
This PR adds support for compressing log files. I added a new option, compression, for the jsonfile log driver; it allows the user to specify a compression algorithm for the rotated log files.

- How I did it
When compression=gzip, the jsonfile driver compresses the rotated history log files, except for container-id-json.log.1. That file is left uncompressed so that log-tracking tools do not lose historical log data when a new log file is created.
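The naming scheme this produces can be sketched in Go; `rotatedName` is a hypothetical helper written for illustration, not the driver's actual code:

```go
package main

import "fmt"

// rotatedName illustrates the naming scheme described above: the live
// file (index 0) and the most recent rotation (.1) stay uncompressed so
// a log tracker never loses the newest history, while older rotations
// carry a .gz suffix.
func rotatedName(base string, index int, compress bool) string {
	if index == 0 {
		return base // the live log file
	}
	name := fmt.Sprintf("%s.%d", base, index)
	if compress && index > 1 {
		name += ".gz"
	}
	return name
}

func main() {
	base := "container-id-json.log"
	for i := 0; i <= 4; i++ {
		fmt.Println(rotatedName(base, i, true))
	}
}
```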
- How to verify it
Run a container as follows:

$ docker run -d --log-opt max-size=1m --log-opt max-file=5 --log-opt compression=gzip busybox /bin/sh -c 'while true; do sleep 0.001; echo "hello!"; done'
b355968fc501942525f8c48a52201e62cc9e48b6b342b4372d60000feb59c447

The results are as follows:

$ ls -l
total 1360
-r--------    1 root     root        120003 Jan  5 10:48 b355968fc501942525f8c48a52201e62cc9e48b6b342b4372d60000feb59c447-json.log
-r--------    1 root     root       1000039 Jan  5 10:48 b355968fc501942525f8c48a52201e62cc9e48b6b342b4372d60000feb59c447-json.log.1
-rw-------    1 root     root         75813 Jan  5 10:48 b355968fc501942525f8c48a52201e62cc9e48b6b342b4372d60000feb59c447-json.log.2.gz
-rw-------    1 root     root         75795 Jan  5 10:48 b355968fc501942525f8c48a52201e62cc9e48b6b342b4372d60000feb59c447-json.log.3.gz
-rw-------    1 root     root         75814 Jan  5 10:48 b355968fc501942525f8c48a52201e62cc9e48b6b342b4372d60000feb59c447-json.log.4.gz
drwx------    2 root     root             6 Jan  5 10:47 checkpoints
-rw-r--r--    1 root     root          2532 Jan  5 10:47 config.v2.json
-rw-r--r--    1 root     root          1165 Jan  5 10:47 hostconfig.json
-rw-r--r--    1 root     root            13 Jan  5 10:47 hostname
-rw-r--r--    1 root     root           174 Jan  5 10:47 hosts
-rw-r--r--    1 root     root            75 Jan  5 10:47 resolv.conf
-rw-r--r--    1 root     root            71 Jan  5 10:47 resolv.conf.hash
drwxrwxrwt    2 root     root            40 Jan  5 10:47 shm

- Description for the changelog

- A picture of a cute animal (not mandatory but encouraged)

Member

I wonder if this should be a string (e.g. "gzip") for extensibility

Member

Contributor Author

@AkihiroSuda The data source is an io.Reader interface type, so I think using io.Copy might be a bit more appropriate here.

Member

Please add a comment, unit tests, and docs?

@AkihiroSuda AkihiroSuda added kind/feature Functionality or other elements that the project doesn't currently have. Features are new and shiny status/1-design-review and removed status/0-triage labels Jan 6, 2017
@AkihiroSuda
Member

Design SGTM

@AkihiroSuda AkihiroSuda added impact/changelog and removed kind/feature Functionality or other elements that the project doesn't currently have. Features are new and shiny labels Jan 6, 2017
Member

compress -> compression?

@miaoyq
Contributor Author

miaoyq commented Jan 6, 2017

Thanks @AkihiroSuda for review, I will refactor the code according to your comments.

@miaoyq miaoyq force-pushed the container-log-add-archive branch 2 times, most recently from ee9745d to 2fcbb08 Compare January 6, 2017 09:14
@thaJeztah
Member

In general, the actual json-log log files are designed for internal use by docker, and not for external consumption. The rotated files are actually used when running docker logs (i.e., the reader loops over those files as well). I don't think we should implement this, if it's only to save space, as it (probably) makes these files no longer useful for docker logs, and using a different logging driver to collect logs centrally is a better solution for this.

TL;DR, I'm not sure we should implement this

/cc @cpuguy83

@cpuguy83
Member

cpuguy83 commented Jan 11, 2017

I don't really see it as harmful. We may just want to compress by default (on rotation) and not even provide a configuration for it.
We can read from compressed files easily enough.
This way docker can store more logs in significantly less space.

@miaoyq miaoyq force-pushed the container-log-add-archive branch 6 times, most recently from ce83e84 to b42c1df Compare January 16, 2017 06:48
@miaoyq
Contributor Author

miaoyq commented Jan 16, 2017

@thaJeztah @cpuguy83 @AkihiroSuda, thanks so much for your advice and guidance.
I have refactored the code; the compressed files are now also readable by docker logs.
For now the option compression=gzip still needs to be set explicitly, rather than being the default.

My test is as follows:
step 1: Create a container with compression=gzip

$ docker run -d --log-opt max-size=1m --log-opt max-file=5 --log-opt compression=gzip busybox /bin/sh -c 'while true; do sleep 0.001; echo "$(date)"; done'
5e597048f230ed657baa51cb70d4ea087257e402f3a0340d480dfbb840514d62

step 2: Check the log files list after about one minute

[node1] (local) root@10.0.11.3 /graph/containers/5e597048f230ed657baa51cb70d4ea087257e402f3a0340d480dfbb840514d62
$ ls -l
total 1324
-r--------    1 root     root        142481 Jan 16 03:41 5e597048f230ed657baa51cb70d4ea087257e402f3a0340d480dfbb840514d62-json.log
-r--------    1 root     root       1000072 Jan 16 03:41 5e597048f230ed657baa51cb70d4ea087257e402f3a0340d480dfbb840514d62-json.log.1
-rw-r-----    1 root     root         60488 Jan 16 03:41 5e597048f230ed657baa51cb70d4ea087257e402f3a0340d480dfbb840514d62-json.log.2.gz
-rw-r-----    1 root     root         60134 Jan 16 03:40 5e597048f230ed657baa51cb70d4ea087257e402f3a0340d480dfbb840514d62-json.log.3.gz
-rw-r-----    1 root     root         60479 Jan 16 03:40 5e597048f230ed657baa51cb70d4ea087257e402f3a0340d480dfbb840514d62-json.log.4.gz
drwx------    2 root     root             6 Jan 16 03:38 checkpoints
-rw-r--r--    1 root     root          2534 Jan 16 03:41 config.v2.json
-rw-r--r--    1 root     root          1168 Jan 16 03:41 hostconfig.json
-rw-r--r--    1 root     root            13 Jan 16 03:38 hostname
-rw-r--r--    1 root     root           174 Jan 16 03:38 hosts
-rw-r--r--    1 root     root            75 Jan 16 03:38 resolv.conf
-rw-r--r--    1 root     root            71 Jan 16 03:38 resolv.conf.hash
drwx------    2 root     root             6 Jan 16 03:38 shm

step 3: Extract the first timestamp from the oldest log file (2017-01-16T03:40:09.719852846Z)

$ vi 5e597048f230ed657baa51cb70d4ea087257e402f3a0340d480dfbb840514d62-json.log.4.gz
    1 {"log":"Mon Jan 16 03:40:09 UTC 2017\n","stream":"stdout","time":"2017-01-16T03:40:09.719852846Z"}
    2 {"log":"Mon Jan 16 03:40:09 UTC 2017\n","stream":"stdout","time":"2017-01-16T03:40:09.73350815Z"}
    3 {"log":"Mon Jan 16 03:40:09 UTC 2017\n","stream":"stdout","time":"2017-01-16T03:40:09.733516097Z"}
    4 {"log":"Mon Jan 16 03:40:09 UTC 2017\n","stream":"stdout","time":"2017-01-16T03:40:09.733520285Z"}
    5 {"log":"Mon Jan 16 03:40:09 UTC 2017\n","stream":"stdout","time":"2017-01-16T03:40:09.733524048Z"}
	   ... ...

step 4: run docker logs

$ docker logs -t -f --since "2017-01-16T03:40:09.719852846Z" 5e >> test.log

step 5: Interrupt the docker logs process and check the contents of test.log. We can find all the logs since 2017-01-16T03:40:09.719852846Z

vi test.log
    1 2017-01-16T03:40:09.719852846Z Mon Jan 16 03:40:09 UTC 2017
    2 2017-01-16T03:40:09.733508150Z Mon Jan 16 03:40:09 UTC 2017
    3 2017-01-16T03:40:09.733516097Z Mon Jan 16 03:40:09 UTC 2017
	  ... ...
	  ... ...
41889 2017-01-16T03:41:18.444189279Z Mon Jan 16 03:41:18 UTC 2017
41890 2017-01-16T03:41:18.445763848Z Mon Jan 16 03:41:18 UTC 2017
41891 2017-01-16T03:41:18.447345408Z Mon Jan 16 03:41:18 UTC 2017
41892 2017-01-16T03:41:18.448965201Z Mon Jan 16 03:41:18 UTC 2017

@LK4D4
Contributor

LK4D4 commented Jan 27, 2017

The design looks good. Let's review.
ping @cpuguy83 @AkihiroSuda

Contributor

This kinda looks dangerous. You might read too much into memory.

Contributor Author

We can replace this with a temporary file, but it may reduce efficiency.

Member

I think we have to, or implement a seeker for a compressed stream.

Contributor Author

@LK4D4 @cpuguy83 I have implemented a seeker for a compressed stream. PTAL.
If anything is wrong, please point me in the right direction, thanks so much!

Contributor

this should be io.Copy instead of read everything in memory

Contributor Author

@miaoyq miaoyq Feb 14, 2017

@LK4D4 @cpuguy83 updated. PTAL

Member

Do we need to support a bunch of formats?

Member

Not relying on the archive package would definitely be a good thing.

Contributor Author

I agree with you, but @AkihiroSuda has different opinions:

I wonder this should be a string (e.g. "gzip") for extensibility

Member

I would like to not have a flag and just compress... do you see why people would not want this?

I guess there are people violating the rule that everything in /var/lib/docker should be private to the engine.

Contributor

But they should be punished.

Member

10 lashings each.

Contributor Author

Do we compress the log file by default? @cpuguy83 @LK4D4

Member

After discussing with others, we decided to not compress by default in this driver as it could break people who aren't expecting it.

Member

I don't think we want to read this in here. This should stream to the compressed writer.

@miaoyq
Contributor Author

miaoyq commented Mar 13, 2018

@cpuguy83 Thanks a lot, rebased.

@thaJeztah
Member

to other maintainers; having a quick look at this one; don't merge yet 😅

Member

@thaJeztah thaJeztah left a comment

Left a comment about naming of the option (compression -> compress), and some thoughts

defer os.RemoveAll(tmp)
filename := filepath.Join(tmp, "container.log")
config := map[string]string{"max-file": "2", "max-size": "1k"}
config := map[string]string{"max-file": "3", "max-size": "1k", "compression": "true"}
Member

Before we merge, I'd like to have one final change: can you rename the compression option to compress? (because it no longer takes the name of a compression, but became a boolean)

marshal logger.MarshalFunc
createDecoder makeDecoderFunc
perms os.FileMode
logPath string
Member

nit: could probably be named just "path"

Member

Actually, given that LogFile.LogPath() seems to be unused, the logPath field looks like it's only used in checkCapacityAndRotate(). Can't we just use f.Name()?

Something like:

diff --git a/daemon/logger/loggerutils/logfile.go b/daemon/logger/loggerutils/logfile.go
index e0f21f46c..428f4786b 100644
--- a/daemon/logger/loggerutils/logfile.go
+++ b/daemon/logger/loggerutils/logfile.go
@@ -79,7 +79,6 @@ func (rc *refCounter) Dereference(fileName string) error {
 
 // LogFile is Logger implementation for default Docker logging.
 type LogFile struct {
-	logPath         string
 	mu              sync.RWMutex // protects the logfile access
 	f               *os.File     // store for closing
 	closed          bool
@@ -111,7 +110,6 @@ func NewLogFile(logPath string, capacity int64, maxFiles int, compress bool, mar
 	}
 
 	return &LogFile{
-		logPath:         logPath,
 		f:               log,
 		capacity:        capacity,
 		currentSize:     size,
@@ -162,15 +160,16 @@ func (w *LogFile) checkCapacityAndRotate() error {
 
 	if w.currentSize >= w.capacity {
 		w.rotateMu.Lock()
+		fname := w.f.Name()
 		if err := w.f.Close(); err != nil {
 			w.rotateMu.Unlock()
 			return errors.Wrap(err, "error closing file")
 		}
-		if err := rotate(w.logPath, w.maxFiles, w.compress); err != nil {
+		if err := rotate(fname, w.maxFiles, w.compress); err != nil {
 			w.rotateMu.Unlock()
 			return err
 		}
-		file, err := os.OpenFile(w.logPath, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, w.perms)
+		file, err := os.OpenFile(fname, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, w.perms)
 		if err != nil {
 			w.rotateMu.Unlock()
 			return err
@@ -185,7 +184,7 @@ func (w *LogFile) checkCapacityAndRotate() error {
 		}
 
 		go func() {
-			compressFile(w.logPath+".1", w.lastTimestamp)
+			compressFile(fname+".1", w.lastTimestamp)
 			w.rotateMu.Unlock()
 		}()
 	}
@@ -262,11 +261,6 @@ func compressFile(fileName string, lastTimestamp time.Time) {
 	}
 }
 
-// LogPath returns the location the given writer logs to.
-func (w *LogFile) LogPath() string {
-	return w.logPath
-}
-
 // MaxFiles return maximum number of files
 func (w *LogFile) MaxFiles() int {
 	return w.maxFiles

Or possibly even:

diff --git a/daemon/logger/loggerutils/logfile.go b/daemon/logger/loggerutils/logfile.go
index e0f21f46c..5ad023423 100644
--- a/daemon/logger/loggerutils/logfile.go
+++ b/daemon/logger/loggerutils/logfile.go
@@ -79,7 +79,6 @@ func (rc *refCounter) Dereference(fileName string) error {
 
 // LogFile is Logger implementation for default Docker logging.
 type LogFile struct {
-	logPath         string
 	mu              sync.RWMutex // protects the logfile access
 	f               *os.File     // store for closing
 	closed          bool
@@ -111,7 +110,6 @@ func NewLogFile(logPath string, capacity int64, maxFiles int, compress bool, mar
 	}
 
 	return &LogFile{
-		logPath:         logPath,
 		f:               log,
 		capacity:        capacity,
 		currentSize:     size,
@@ -166,11 +164,11 @@ func (w *LogFile) checkCapacityAndRotate() error {
 			w.rotateMu.Unlock()
 			return errors.Wrap(err, "error closing file")
 		}
-		if err := rotate(w.logPath, w.maxFiles, w.compress); err != nil {
+		if err := rotate(w.f.Name(), w.maxFiles, w.compress); err != nil {
 			w.rotateMu.Unlock()
 			return err
 		}
-		file, err := os.OpenFile(w.logPath, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, w.perms)
+		file, err := os.OpenFile(w.f.Name(), os.O_WRONLY|os.O_TRUNC|os.O_CREATE, w.perms)
 		if err != nil {
 			w.rotateMu.Unlock()
 			return err
@@ -185,7 +183,7 @@ func (w *LogFile) checkCapacityAndRotate() error {
 		}
 
 		go func() {
-			compressFile(w.logPath+".1", w.lastTimestamp)
+			compressFile(w.f.Name()+".1", w.lastTimestamp)
 			w.rotateMu.Unlock()
 		}()
 	}
@@ -262,11 +260,6 @@ func compressFile(fileName string, lastTimestamp time.Time) {
 	}
 }
 
-// LogPath returns the location the given writer logs to.
-func (w *LogFile) LogPath() string {
-	return w.logPath
-}
-
 // MaxFiles return maximum number of files
 func (w *LogFile) MaxFiles() int {
 	return w.maxFiles

Member

It's nice to store it because w.f.Name() can be racy (since w.f will change) unless locking is involved, whereas the actual path never changes.

Member

Yes, I was looking at that; it seems the only places LogFile.f is written to are during construction in NewLogFile() and in checkCapacityAndRotate(), and the filename never actually changes (it's always /var/lib/docker/containers/<container-id>/<container-id>-json.log).

Perhaps the first option, using an intermediate variable (fname := w.f.Name()) would work then.

Contributor Author

@miaoyq miaoyq Mar 14, 2018

Perhaps the first option, using an intermediate variable (fname := w.f.Name()) would work then.

I think what @thaJeztah said makes sense if LogPath() is removed. @cpuguy83 WDYT?

Member

This works, but still need to lock to get the file name.

Contributor Author

@cpuguy83 Yeah, we can get the file name after w.rotateMu.Lock().

return nil
}

extension := ""
Member

nit: could be

var extension string

Contributor Author

Will do.


//NewLogFile creates new LogFile
func NewLogFile(logPath string, capacity int64, maxFiles int, marshaller logger.MarshalFunc, decodeFunc makeDecoderFunc, perms os.FileMode) (*LogFile, error) {
func NewLogFile(logPath string, capacity int64, maxFiles int, compress bool, marshaller logger.MarshalFunc, decodeFunc makeDecoderFunc, perms os.FileMode) (*LogFile, error) {
Member

Not always a fan of boolean arguments; I was wondering whether this should take the name of an actual compression algorithm (none/gzip/flate), or a compression function (similar to decodeFunc), but I'm not sure we'll be adding other compressions in future, so I guess this is okay.

}

// LogPath returns the location the given writer logs to.
func (w *LogFile) LogPath() string {
Member

This doesn't seem to be used anywhere, correct?

Member

Yeah, I guess this isn't used anymore.

Contributor Author

Will remove this.

}

go func() {
compressFile(w.logPath+".1", w.lastTimestamp)
Member

So, the only time this could be a problem is if the next rotation takes place before compression has completed; not sure if that's a real issue, so just thinking out loud. I did an extreme test;

diff --git a/daemon/logger/loggerutils/logfile.go b/daemon/logger/loggerutils/logfile.go
index e0f21f46c..40bbba485 100644
--- a/daemon/logger/loggerutils/logfile.go
+++ b/daemon/logger/loggerutils/logfile.go
@@ -254,7 +254,7 @@ func compressFile(fileName string, lastTimestamp time.Time) {
                // Here log the error only and don't return since this is just an optimization.
                logrus.Warningf("Failed to marshal JSON: %v", err)
        }
-
+       time.Sleep(50 * time.Second)
        _, err = pools.Copy(compressWriter, file)
        if err != nil {
                logrus.WithError(err).WithField("module", "container.logs").WithField("file", fileName).Error("Error compressing log file")

Running with that, rotation waits for compression to complete, and killing the container became problematic, which is likely expected; just thinking about whether there are real-world situations where that would happen.

Member

Yeah, this should be expected. There's possibly more we can do to prevent blocking on killing the container, but there's already a lot going on with locking for this case.

Member

Yup, it was probably a bit extreme, just had my "QA hat" on, and seeing what possible things could be problematic. Don't think it should be a blocker for this feature.

@thaJeztah
Member

ping @cpuguy83 could you look at #29932 (comment) if you're ok with changing it to that?

@miaoyq miaoyq force-pushed the container-log-add-archive branch from 7448d23 to 61a046e Compare March 15, 2018 12:09
@miaoyq
Contributor Author

miaoyq commented Mar 15, 2018

@cpuguy83 @thaJeztah Updated.

This PR adds support for compressing log files.
I added a new option, compression, for the jsonfile log driver;
this option allows the user to specify a compression algorithm to
compress the log files. By default, the log files are not
compressed. At present, only 'gzip' is supported.

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

'docker logs' can read from compressed files

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

Add metadata to the gzip header, optimize 'readlog'

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
Member

@cpuguy83 cpuguy83 left a comment

LGTM

Member

@thaJeztah thaJeztah left a comment

LGTM!!

thanks for this, I know it took a while 😅

@thaJeztah
Member

hm, looks like this is a flaky test on z;

19:26:10 ----------------------------------------------------------------------
19:26:10 FAIL: docker_api_attach_test.go:98: DockerSuite.TestPostContainersAttach
19:26:10 
19:26:10 docker_api_attach_test.go:211:
19:26:10     c.Assert(actualStdout.Bytes(), checker.DeepEquals, []byte("hello\nsuccess"), check.Commentf("Attach didn't return the expected data from stdout"))
19:26:10 ... obtained []uint8 = []byte{0x73, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73}
19:26:10 ... expected []uint8 = []byte{0x68, 0x65, 0x6c, 0x6c, 0x6f, 0xa, 0x73, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73}
19:26:10 ... Attach didn't return the expected data from stdout
19:26:10 

@miaoyq
Contributor Author

miaoyq commented Mar 16, 2018

Thanks @cpuguy83 @thaJeztah 😄

@thaJeztah
Member

argh, and now it failed on another flaky test (#36551 / #36547)
