Re: Proposed Standard requirements

Hi Dave,
At 09:11 06-03-2014, Dave Cridland wrote:
On 6 March 2014 16:51, Barry Leiba <barryleiba@xxxxxxxxxxxx> wrote:
> My point here is that until we have interoperable implementations on both
> client and server and have caught and fixed any issues in the spec, we
> shouldn't advance the status.

Why not?  Interoperable implementations aren't required for Proposed
Standard.  Internet Standard, sure, but not PS.  The requirement for
PS is that we think we got it right.


So this made sense when Proposed Standard actually meant Proposed, rather than Standard.

Quoting Olaf, Sean and Scott:

  "A Proposed Standard specification is stable, has resolved known
   design choices, has received significant community review, and
   appears to enjoy enough community interest to be considered valuable.

   Usually, neither implementation nor operational experience is
   required for the designation of a specification as a Proposed
   Standard.  However, such experience is highly desirable and will
   usually represent a strong argument in favor of a Proposed Standard
   designation."

But now that Proposed Standard actually means Draft, because (to paraphrase Scott Bradner), we've left-shifted our standards process, why aren't we requiring implementation experience for PS?

There is an IETF Area which requires implementation experience. There can also be cases where implementation experience is needed because you cannot go back and fix things once you have broken deployed systems.

We've tightened up review a fair bit, and raised the bar for spec quality in general, but since PS is now DS by any other name, the effect has been to side-step the "two independent interoperable implementations" requirement.

I would like to see more implementation experience -- it'd be great to
see.  That's why I asked whether there is any.  But to me, the gating
factor is more about whether there's *interest* in implementing it.

Wouldn't it be great to require it? Running code and all that, after all...

"Two independent implementations" is not running code. Running code is what is deployed out there. There is likely a thread in the YAM archives about "widely deployed". Lab tests do not unearth the problems that are encountered in the wild.

In terms of compliance, do you test the implementations against RFC 821 or RFC 5321? Please note that I have looked into what is being required out there.

There is the gating factor. There is a BCP about documenting implementation experience which is now in an unknown state (the reader would have to do a thorough read of BCP 9 to find out why). There is also a thread on the (IRTF) CFRG list relevant to "Proposed Standards" ( http://www.ietf.org/mail-archive/web/cfrg/current/msg04327.html ). This is where a person might wonder what the gating factor is in practice.

I am biased when I participate in a discussion about a Proposed Standard; that is, I know the background, and I know the finer points raised in the discussions but not captured in the Proposed Standard. I take these finer points into consideration as I implement the specification (before the specification becomes an RFC). That makes it difficult for me to assess the readability of an intended Proposed Standard. Readability matters, as the specification will have to be read by someone out there who will be implementing it.

Regards,
S. Moonesamy
